Team Status Report for 4/27

Schedule

Now that we are in the final stretch, we need to test the full system and continue working on our remaining tasks: drawing accurately placed rectangular red segments and fine-tuning the frequency of our data sending/receiving now that we are no longer limited by it. We are also working to improve the UI so that it is instructive and intuitive on demo day. Beyond the technical improvements to the project itself, we need to start on our presentation materials, which include the final video, the poster, and the written report.

Risks & Challenges

We currently have a little work left to accurately reflect the dirty and clean space on the AR side. The bounding boxes do not perfectly align yet, so we are still tuning the values within our AR app to ensure that the camera view matches the response on the iPhone’s app. Erin and Harshul are working to tune these parameters, and Nathalie is working on final touches with the UI. We still have a couple final tests to run, and we need to mount the active illumination to the system. However, we don’t anticipate any of these tasks to be blocking.

Validation: 

  1. Line Tracking Test
    Using masking tape, create a straight one-meter section and a zig-zag section of four 90° bends, each half a meter long. Using four 8.5″x11″ US Letter sheets, print out a curved line section of the same width as the masking tape and assemble them into a contiguous line. Then drive the vacuum over the line and verify that the drawn line tracks the reference lines with smooth connectivity and no perpendicular deviation of more than 5 cm.
    Results: Verified that we are able to track a range of motions from straight to sharp curves. Instead of mocking this up on paper, we used TechSpark’s robot tracking tape path, which had a similar geometry to what was outlined in the test.
  2. Technical Requirements Tracking Test
    In our design report, we outlined a 5 cm radius for the vacuum head’s suction. Upon setting that radius for the SCNLine, the drawn lines should encompass the length of the vacuum head. Additionally, we outlined a 0.5 s latency requirement for the tracking motion. To test this, we move in a straight line for two meters and verify that, upon stopping, the entire motion is captured on the AR application within 0.5 seconds and accurately reflects the movement.
    Results: Tracking is in real time and the line drawn is continuous with no stuttering. This test informed our decision to move to SCNLine rather than our previous naive implementation of segmented cylinders. (A sketch of the per-frame tracking and drawing loop appears after this list.)
  3. Plane Projection Verification
    We draw a test plane offset from the floor plane by 5 cm plus the radius of the SCNLine. Then we place the phone such that the camera is touching the floor and positioned perpendicular to it. Upon visual inspection, none of the drawn shape should touch or extend past this test plane.
    Results: Verified that this projection method works. It is reliable even after changing from a hit-test to a plane projection (also covered in the sketch after this list).
  4. BLE Connection:
    Send dirty/clean messages thirty times and record the delay between consecutive messages. Consider the summary statistics of the collected data; assuming the standard deviation is not large, we can focus on the mean value. We hope for the mean to be around 0.1 s, which corresponds to roughly 10 fps. This may not be achievable given the constraints of BLE, but these tests will let us determine what our system can achieve.
    Results: We ran tests and collected over two hundred sample latency values. We have achieved the desired 0.1 s latency; the bottleneck has now become the camera’s frame rate, which is 30 fps. (The latency statistics we computed are sketched after this list.)
  5. Dirt Detection
    Take ten photos each of: 1) Clean floor, 2) Unclean floor, sparse dirt, and 3) Unclean floor, heavy dirt. Assert that the dirt detection module classifies all ten images of the clean floor as “clean”, and all twenty images of the unclean floor as “dirty”. If this is not met, we will need to further tune the thresholds of the dirt detection algorithm.
    Results: After running our test script on two different dirt detection algorithms, we achieved the results below. Based on these values, we have chosen the second algorithm.

    Metric                       Algorithm 1   Algorithm 2
    False positive rate (FPV)    81.82%        0.00%
    False negative rate (FNV)    0.00%         9.52%
    Accuracy (ACC)               71.88%        93.75%
  6. Camera Bounding Box Test
    Set up the vacuum system on a table in TechSpark and place small red rods to delimit the borders of the camera’s field of view. By accurately determining these boundaries, we refined the cropping and granularity of our image capture.
    Results: We have now determined the correct parameters to use when cropping the camera field of view in our dirt detection algorithm.
  7. Jetson Mounting Tests
    Note that these tests were very prototype focused. We created cardboard cutouts with a knife to identify shape and stiffness requirements. We then cut the pieces out of both acrylic and plywood and tested to make sure the components had enough give to still function modularly. We tested mounting our components using tape, hot glue, and epoxy. The tape was flimsy and too vulnerable to jostling for our main mount, so we opted for hot glue and/or epoxy for mounting fixed components. Initial testing with the first mount iterations found that the wheels jostled the mount, so we fabricated standoffs to account for this.
    Results: The initial camera cropping and resolution were too narrow. We implemented design changes and updated the camera configuration to capture a bounding box more appropriately sized to the width of the vacuum.
  8. Camera Mounting
    Tested various camera angle configurations against the amount of dirt captured to identify optimal mounting orientation.
    Results: We used this test to select our final mounting orientation for the Jetson camera.

  9. Battery Life Test
    We start the Jetson from a full charge and run the entire end-to-end system for five minutes. After the five minutes are up, we check the charge on the Jetson. A charge depletion of less than 2.08% would indicate success, since 2.08% per five minutes corresponds to roughly four hours of battery life, our use-case requirement.
    Results: Over a span of thirty minutes, the power bank battery was depleted by only around 10% (across four independent trials). This satisfies our requirements for battery life.
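
To make the tracking tests above concrete, below is a minimal sketch of the per-frame drawing loop referenced in tests 2 and 3. It assumes an ARSCNView-based SceneKit app and the SCNLineNode API (an init taking a radius plus an add(point:) method) from the open-source SCNLine package; the 5 cm radius comes from our requirements, while the class and property names are illustrative rather than our actual code.

```swift
import ARKit
import SceneKit
import SCNLine   // open-source package by maxfrazer; API names assumed here

final class CoverageDrawer: NSObject, ARSCNViewDelegate {
    // 5 cm suction radius from the design report; edges/maxTurning control tube smoothness.
    private let lineNode = SCNLineNode(with: [], radius: 0.05, edges: 12, maxTurning: 16)
    private var floorY: Float = 0          // y of the frozen floor plane in world space
    weak var sceneView: ARSCNView?

    func attach(to view: ARSCNView, floorHeight: Float) {
        sceneView = view
        floorY = floorHeight
        view.scene.rootNode.addChildNode(lineNode)
        view.delegate = self
    }

    // Called every rendered frame, comfortably inside the 0.5 s latency budget.
    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
        guard let camera = sceneView?.session.currentFrame?.camera else { return }
        let t = camera.transform.columns.3                 // phone position in world space
        // Plane projection (test 3): drop the point straight onto the floor plane,
        // offset by the line radius so the tube sits on top of the floor.
        lineNode.add(point: SCNVector3(t.x, floorY + 0.05, t.z))
    }
}
```

For the BLE latency measurements in test 4, we summarized the gaps between consecutive message arrivals. The sketch below shows the statistics we report; the type and method names are illustrative, not our actual receiver code.

```swift
import Foundation

/// Collects arrival times of dirty/clean BLE messages and reports summary statistics.
struct LatencyLog {
    private var arrivals: [Date] = []

    mutating func recordArrival(_ time: Date = Date()) {
        arrivals.append(time)
    }

    /// Mean and standard deviation (in seconds) of the gaps between consecutive messages.
    func summary() -> (mean: Double, stdDev: Double)? {
        guard arrivals.count > 1 else { return nil }
        let gaps = zip(arrivals.dropFirst(), arrivals).map { $0.timeIntervalSince($1) }
        let mean = gaps.reduce(0, +) / Double(gaps.count)
        let variance = gaps.reduce(0) { $0 + ($1 - mean) * ($1 - mean) } / Double(gaps.count)
        return (mean, variance.squareRoot())
    }
}

// Usage: call recordArrival() from the Bluetooth receive callback, then check that
// summary()?.mean is close to the 0.1 s (≈10 fps) target and that stdDev is small.
```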

Team Status Report for 4/20

Risks & Challenges

Our efforts are directed towards sketching out the hardware, which involves laser cutting acrylic for mounts. This is a potential risk because the mounting hardware may not act as intended, in which case we would need to come up with a new hardware approach with different materials, such as 3D printing a mount and attaching it with Gorilla Glue. Luckily, the hardware components are not extensive, because we only need to mount a small camera and two lights. Nathalie’s research led to consideration of SCNPath for indicating the covered area. However, this has challenges in modifying attributes like color retroactively based on real-time data and in keeping lines in memory the way SCNLine does. She has been working on path coloration logic, but this depends on integrating it with Bluetooth message reception. Issues with the Bluetooth include its reliability and speed; Erin has been working on speeding up the BLE data transmission. Unfortunately, it may be bottlenecked, and we are unsure whether we can get any additional speedup beyond what our current system offers. With the final demo looming, our biggest challenge is time, so effective prioritization of tasks is important.

Schedule

Thankfully, we are back on track this week after figuring out all the issues with the BLE transmission. We have mounted the AR detection image as well as the Jetson camera. The Jetson computer has yet to be fully attached to the system, as we are keeping it detachable in case we need to continue development. The phone mount has also been attached to the vacuum system. We are currently working on the final stages of testing and verification, as well as the final presentation, demo, and poster.

As you’ve designed, implemented and debugged your project, what new tools or new knowledge did you find it necessary to learn to be able to accomplish these tasks? What learning strategies did you use to acquire this new knowledge?

Tools used: laser cutting, material management, setting objectives for each meeting

Learning strategies: constant communication, goal setting for each meeting so that we knew what needed to be accomplished in the work sessions, selecting a Plan B/C alternative in case our original plan doesn’t work

We’ve encountered various challenges that required us to learn new tools and knowledge to overcome them. Most of our ECE experience has been with software, so we had to learn and coordinate designing mounts and laser cutting materials like acrylic and wood. We also learned how to navigate Apple’s ARKit documentation and how to work with the official documentation and implementations alongside the developer community’s add-on solutions. SCNPath and SCNLine were created by an independent developer (maxfrazer) before he began working for Apple.

To acquire these new skills and technical knowledge, we relied on constant communication and goal setting for our meetings. We made sure to discuss our progress and challenges regularly, setting clear objectives for each meeting to ensure that we were aligned on what needed to be accomplished. That way, we could prioritize which tasks were related to our system integration, and then allow feature improvements to occur asynchronously on our own time.

Team Status Report for 4/6

Risks

One large risk our group is facing is that Erin is currently dealing with a number of issues regarding the Jetson. This is a large blocker for the entire end-to-end system, as we are unable to demonstrate whether the dirt tracking on the AR application is working properly while the Jetson subsystem is offline. Without dirt detection and the BLE connection functioning, the AR system does not have the data needed to determine whether the flooring is clean or dirty, and we have no way of validating whether our transformation of 3D data points on the AR side is accurate. Moreover, Erin is investigating whether it is even possible to speed up the Bluetooth data transmission. Currently, it seems that an async sleep call of around ten seconds is necessary in order to preserve functionality. This, along with the BLE data transmission limit, may force us to readjust our use-case requirements.

Nathalie and Harshul have been working on tracking the vacuum head in AR space and on getting the coordinate transformation correct. While the coordinate transformations depend on the Jetson subsystem (as mentioned above), the vacuum head tracking does not, and we have made significant progress on that front.
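
As an illustration of the coordinate transformation involved, the sketch below shows how a fixed offset from the phone to the spot the Jetson camera watches behind the vacuum head (the constant translation we will measure once the Jetson is mounted) can be carried from the phone camera’s local frame into AR world coordinates. The offset values shown are placeholders, not measured numbers.

```swift
import ARKit
import simd

/// Converts a point expressed in the phone camera's local frame into world coordinates,
/// e.g. the patch of floor the Jetson camera observes behind the vacuum head.
func worldPosition(ofLocalOffset offset: simd_float3,
                   cameraTransform: simd_float4x4) -> simd_float3 {
    let local = simd_float4(offset.x, offset.y, offset.z, 1)   // homogeneous coordinates
    let world = simd_mul(cameraTransform, local)               // rotate + translate into world space
    return simd_make_float3(world)
}

// Example with a placeholder offset (to be replaced by the measured translation):
// let camera = sceneView.session.currentFrame!.camera
// let dirtSpot = worldPosition(ofLocalOffset: simd_float3(0, -0.05, 0.3),
//                              cameraTransform: camera.transform)
```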

Nathalie has also been working on mounting the phone to the physical vacuum. We purchased a phone stand rather than designing our own mounting system, which saved us time. However, the angle the stand can accommodate may not be enough for the iPhone to get a satisfactory read, so we plan to test this more extensively to figure out the best orientation and mounting process for the iPhone.

System Validation:

Drawing on our design report and incorporating Professor Kim’s feedback, we outlined the end-to-end validation test that he would like to see from our system. Below is the test plan for formalizing and carrying out this test.

The goal is to have every subsystem operational and test connectivity and integration between each subsystem.

Subsystems:

  1. Bluetooth (Jetson)
  2. Dirt Detection (Jetson)
  3. Plane Projection + Image Detection (ARKit)
  4. Plane Detection + UI
  5. Bluetooth (ARKit)
  6. Time:position queue (ARKit)

With an initial mapping of the room complete and the plane frozen, place the phone into the mount and start drawing to track the vacuum position. Verify that the detection image is recognized, that the drawn line trails behind the vacuum, and that the queue is being populated with time:position points. (Tests: 3, 4, 5, 6)

Place a large, visible object behind the vacuum head in view of the active illumination devices and the Jetson camera. Verify that the dirt detection script categorizes the image as “dirty”, and proceed to validate that this message is sent from the Jetson to the working iPhone device. Additionally, validate that the iPhone has received the intended message from the Jetson, and then proceed to verify that the AR application highlights the proper portion of flooring (containing the large object). (Tests: 1, 2) 
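
To make the queue step concrete, here is a minimal sketch of the time:position lookup described above, assuming the Jetson and the phone timestamps share a reference clock. When a timestamped “dirty” message arrives over BLE, we find the floor position recorded closest to that time and mark it. The names here are illustrative rather than our actual implementation.

```swift
import Foundation
import simd

/// A floor-projected vacuum position tagged with the capture time.
struct TimedPosition {
    let time: TimeInterval      // seconds since a shared reference clock
    let position: simd_float3   // world-space point on the floor plane
}

final class PositionQueue {
    private var entries: [TimedPosition] = []   // appended in time order as we track the vacuum

    func record(_ entry: TimedPosition) {
        entries.append(entry)
    }

    /// Returns the recorded position whose timestamp is closest to the Jetson's
    /// "dirty" message timestamp, or nil if nothing has been recorded yet.
    func position(closestTo messageTime: TimeInterval) -> simd_float3? {
        entries.min(by: { abs($0.time - messageTime) < abs($1.time - messageTime) })?.position
    }
}

// On receiving a "dirty" BLE message: look up the position and highlight that
// patch of the AR floor overlay (e.g. by adding a marker node there).
```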

Schedule

Our schedule has been unexpectedly delayed by the Jetson malfunctions this week, which has hindered progress on the dirt detection front. Erin has been especially involved with this, and we are hoping to have it reliably resolved soon so that she can instead focus her energy on reducing the latency of the Bluetooth communication. Nathalie and Harshul have been making steady progress on the AR front, but it is absolutely crucial for each of our subsystems to have polished functionality so that we see more integration progress, especially with the hardware. We are mounting the Jetson this week(end) to measure the constant translational difference so we can add it to our code, and doing accompanying tests to ensure maximal precision. A challenge has been our differing free times during the day, but since we are working on integration testing between subsystems, it is important that we all meet together and with the hardware components. To mitigate this, we set aside chunks of time on our calendars allotted to specific integration tests.

Team Status Report for 3/30

Risks

Our challenges currently lie in integrating the subsystems that we have all been working on in parallel. From our discussions this week, we have decided on data models for the output of Erin’s dirt detection algorithms, which are the inputs to Nathalie and Harshul’s AR mapping algorithms. Current risks lie in establishing Bluetooth communication between the Jetson camera and the iPhone: we have set up the connection for receiving/sending and can see the available device, but Apple’s black-box security measures currently prevent us from sending files. Developers have been able to circumvent this in the past, so we are actively researching what methods they used. At the same time, we are actively exploring workarounds and have contingency plans in place. Options include employing web communication via HTTP requests or utilizing file read/write operations.

Other risks include potentially slow drawing functions when tracking the camera. Right now, there seems to be a lot of latency that impacts the usability of our system, so we are researching faster drawing methods in ARKit. To address this, Nathalie is exploring alternatives such as the SCNLine module to potentially enhance performance. Similarly, Harshul is working on creating child nodes in a plane to see which approach is faster. We can always use the GPU/CUDA if we need additional speedup.

In addition, we have our main software components making progress but need to focus on how to design and mount hardware. This is a challenge because none of us have extensive experience in CAD or 3D printing, and we are in the process of deciding how to mount the hardware components (Jetson, camera, active illumination) such that it fits our ideal criteria (i.e. the camera needs to be mounted at the identified height and angle). Doing so earlier (immediately after the demos) will allow us to iterate through different hardware methods and try different mounts that we design to figure out what holds the most stability while not compromising image quality.


Schedule

In the coming week, we plan to flesh out a plan for how to mount our hardware on the vacuum. We have already set up the Jetson such that it will be easy to fasten to the existing system, but the camera and its positioning are more challenging to engineer mounts for. In addition, the AR iPhone application is nearing the end of its development cycle, as we are now working mostly on optimizations rather than core features. We are also considering options for how to mount the iPhone. Nathalie has been working on how to pinpoint the location of the rear camera view based on the timestamps received from the Jetson. This may still need to be tweaked after we get the Bluetooth connection fully functional, which is one of the main action items we have for the coming week.

Team Status Report for 3/23

Risks

With the base implementation of the augmented reality floor mapping done, we are about to move into marking/erasing the overlay. Nathalie and Harshul have discussed multiple implementation strategies for marking coverage and are not entirely sure which approach will be most successful; this is something we will determine when working on it this week. Our initial thought is to combine the work that we have each done separately (Nathalie having mapped the floor and Harshul having created logic to change the plane color on tap). Specifically, we want to add nodes of a specific shape to the floor plane in a different color, like red, with the diameter of the shape equivalent to the width of the floor vacuum; a sketch of this idea follows. We first need to figure out how to do that, and once it works, what shape would best capture the vacuum coverage dimensions. This is important because the visual representation of coverage is essential to our project actually working. As a fallback, we have experimented with the World Tracking Configuration logic, which is able to capture our location in space, and we are willing to explore how our alternative approaches might solve the problem of creating visual indicators on a frozen floor map.
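
A rough sketch of that idea, assuming a SceneKit floor plane node and a placeholder vacuum-head width of 25 cm; the function name is hypothetical:

```swift
import SceneKit
import UIKit

/// Attaches a flat red disc to the floor plane node at the given local position.
/// The diameter should match the vacuum head width (placeholder: 0.25 m).
func addCoverageMarker(to floorNode: SCNNode,
                       at localPosition: SCNVector3,
                       diameter: CGFloat = 0.25) {
    let disc = SCNCylinder(radius: diameter / 2, height: 0.001)   // effectively a flat circle
    disc.firstMaterial?.diffuse.contents = UIColor.red.withAlphaComponent(0.6)

    let marker = SCNNode(geometry: disc)
    marker.position = localPosition      // expressed in the floor node's coordinate space
    marker.position.y += 0.001           // nudge above the plane to avoid z-fighting
    floorNode.addChildNode(marker)
}
```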

The core challenge is that, upon freezing map updates, we run the risk of odometry drift: as we move around the room, tracking information changes but does not propagate to the planes already drawn in the scene. Keeping the map dynamic mitigates this, but it prevents consistency in the actual dimensions of our plane, which makes it difficult to measure and benchmark our coverage requirements. One mitigation would be a custom update renderer that avoids redefining plane boundaries but still allows the anchor position to change, as sketched below.
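
A minimal sketch of that custom update handler, assuming an ARSCNViewDelegate whose floor plane visualization was built as a child of the anchor’s node during the initial scan; names and structure are illustrative:

```swift
import ARKit
import SceneKit

final class FrozenPlaneDelegate: NSObject, ARSCNViewDelegate {
    /// Set to true once the initial scan is accepted and the floor plane is "frozen".
    var planeFrozen = false

    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor,
              let planeNode = node.childNodes.first else { return }

        // Always let the anchor's refined center reposition the node, so tracking
        // corrections still propagate to what we've drawn...
        planeNode.simdPosition = simd_float3(planeAnchor.center.x, 0, planeAnchor.center.z)

        // ...but once frozen, skip regenerating the geometry so the plane's
        // dimensions (and our coverage measurements) stay consistent.
        guard !planeFrozen, let plane = planeNode.geometry as? SCNPlane else { return }
        plane.width = CGFloat(planeAnchor.extent.x)
        plane.height = CGFloat(planeAnchor.extent.z)
    }
}
```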

Another challenge that our group is currently facing is the accuracy of the mapping. While we addressed this issue before, the problem still stands. At this time, we have not been able to get the ARKit mappings to reflect the error rates that we desire, as specified by our use case requirements. This is due to the constraints of Apple’s hardware and software, and tuning these models may not be a viable option given the time remaining in the semester. Our group has discussed readjusting the error bounds in our use case requirements, and this is something we plan to flesh out within the week.

We also need to get started on designing and productionizing all the hardware components we need in order to assemble our product end to end. The mounts for the Jetson hardware as well as the active illumination LEDs need to be custom made, which means that we may need to go through multiple iterations of the product before we are able to find a configuration that works well with our existing hardware. Since the turnaround is tight considering our interim demo is quickly approaching, we may not be able to demonstrate our project as an end-to-end product; rather, we may have to show it in terms of the components that we have already tested. 

Scheduling 

We are now one week away from the interim demo. The last core AR feature we need to implement is plane erasure. We’ve successfully tracked the phone’s coordinates and drawn them in the scene; the next step is to project that data onto the floor plane. This would leave the AR subsystem ready to demo. Since our camera positioning has been finalized, we are beginning to move forward with designing and 3D printing the mounting hardware. The next milestones will entail a user-friendly integration of our app features as well as working on communication between the Jetson and the iPhone.



Team Report for 3/16

Risks

Nathalie and Harshul are working on projecting a texture onto the floor, with memory. Through demos and experimentation, it seems that the accuracy of the plane mapping depends on the user’s ability to perform the initial scanning. One major risk that could jeopardize the project is the accuracy of this mapping, because both dirt detection and coverage metrics depend on being placed accurately on the map. To mitigate this, we are doing careful research during our implementation phases on how to best anchor the points, and performing tests in different kinds of rooms (blank rooms versus ones with lots of things on the floor) to validate our scope of a plain white floor. Erin is working on dirt detection, and we found that our initial algorithm was sensitive to noise. We have created a new dirt detection algorithm which makes use of many of OpenCV’s built-in preprocessing functions, rather than preprocessing the input images ourselves. While we originally thought the algorithms were very sensitive to noise, we have recently realized that this may be more of an issue with our image inputs. The new dirt detection algorithm is less sensitive to shade noise, but will still classify patterning as dirty. Regardless, we hope to tune the algorithm to be less sensitive to noise, testing its performance at the desired height and angle.

The main risk encountered in working with plane texturing was the inaccuracy in how the plane was fitted to the detected plane boundaries. Since this is a core feature, it is a top priority that we plan on addressing this coming week, and we are meeting as a team to develop a more robust approach. We also need this component to be finished in order to begin integration, which we presume will be a nontrivial task, especially since it involves all of our hardware.


Next Steps

We are currently in the process of finalizing the ideal height and angle for the camera mount. Once that is decided, we will be thresholding and validating our definition of dirt with respect to the angle that was selected. Once dirt is detected, we will need to record the position to communicate with the augmented reality parts of our system. We still need to sync as a team on the type of information that the augmented reality algorithm would need to receive from the Jetson and camera. For the texture mapping on the floor, we are working on projecting the overlay with accuracy defined in our technical scope. Further, we will then be working on tracking motion of a specific object (which will represent the vacuum) in the context of the floor mapping. We hope to have a traceable line created on our overlay that indicates where we can erase parts of the map.

Team Status Report for 3/9

What are the most significant risks that could jeopardize the success of the project? How are these risks being managed? What contingency plans are ready?

We believe that mounting the Jetson components could cause some unforeseen issues. We need to design hardware to hold the Jetson camera, and we may need to create another mount for the Jetson computer. The challenges we are facing include creating a system which is stable, yet does not inhibit access to any part of the computer or the camera. In addition, the computer mount should not cause the Jetson to overheat in any capacity; this adds a constraint on the mount design to ensure that the heatsink is exposed to the air and has good airflow.

Our proofs of concept have covered a good amount of the feature set we need, but a key feature not yet accomplished is projecting a texture onto a surface and modifying that texture. We have made progress with mapping, and the next step this coming week is to project a texture onto the floor plane and explore how we can modify the texture as we move. To mitigate the risk posed by the complexity of this task, we have researched many candidate approaches that we can experiment with to find the one that best fits our needs. Apple provides two AR APIs, SceneKit and RealityKit, and both support projecting textures. We could create a shader, modify the texture in real time, or create another scene node to occlude the existing texture on the plane. This will be a key action item going forward, and a rough sketch of the simplest approach follows.
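
As a starting point for this action item, the simplest of these approaches in SceneKit is sketched below: when ARKit reports a horizontal plane, attach a textured, semi-transparent SCNPlane to it, and later “modify the texture” by swapping the material’s contents or occluding regions with child nodes. The floorTexture asset name is a placeholder, and this is a sketch rather than a final implementation.

```swift
import ARKit
import SceneKit
import UIKit

final class FloorTextureDelegate: NSObject, ARSCNViewDelegate {
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }

        // Build a plane matching the detected floor extent and lay it flat.
        let plane = SCNPlane(width: CGFloat(planeAnchor.extent.x),
                             height: CGFloat(planeAnchor.extent.z))
        plane.firstMaterial?.diffuse.contents = UIImage(named: "floorTexture") // placeholder asset
        plane.firstMaterial?.transparency = 0.5

        let planeNode = SCNNode(geometry: plane)
        planeNode.simdPosition = simd_float3(planeAnchor.center.x, 0, planeAnchor.center.z)
        planeNode.eulerAngles.x = -.pi / 2     // SCNPlane is vertical by default; rotate onto the floor
        node.addChildNode(planeNode)
    }
}

// "Modifying the texture as we move" can then be as simple as reassigning
// firstMaterial?.diffuse.contents, or occluding regions with child nodes.
```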

Were any changes made to the existing design of the system (requirements, block diagram, system spec, etc)? Why was this change necessary, what costs does the change incur, and how will these costs be mitigated going forward?

We had not previously considered mounting the Jetson computer at a higher point on the vacuum. This alteration will incur an extra cost to cover an extension cable for the Jetson camera. In addition, we previously planned on using only one of the active illumination light components that we purchased, but we have now decided to use two. This will not induce any additional cost for us, as we had purchased two units to start.

Schedule 

The schedule has not changed from prior weeks. Our subtasks remain assigned as follows: Erin on dirt detection, Harshul and Nathalie on augmented reality plane mapping & tracking. We are going to sync on progress and reassign tasks in the coming weeks.

ABET Considerations

Part A was written by Harshul, Part B was written by Nathalie, and Part C was written by Erin.

Part A: Global Factors

Some key global factors that we are considering are human-centric design and technology penetration. To make this app accessible to the broadest customer base, it is important to avoid any unnecessary complexity in the application and to ensure that the app is intuitive to users and leverages built-in APIs for accessibility in different language modalities. With respect to technology penetration, we are keenly aware that AR and XR systems are still in the early stages of the product adoption curve, which means that the cost of truly immersive AR solutions like headsets is quite high and that they are not nearly as ubiquitous as smartphones. Since Apple holds significant market share, we felt that building for the iPhone would allow for greater access to and penetration of our app, given the much lower barrier to entry. Additionally, because our application is designed to use Bluetooth and on-device capabilities, the app’s functionality will not be constrained if deployed in rural regions with inconsistent or reduced wireless connectivity.

Part B: Cultural Factors

When accounting for cultural factors, it’s important to consider what cleanliness means in certain cultures. There are different customs and traditions associated with cleaning, and different times of day and frequencies at which people perform their vacuuming duties. Further, we are assuming that our users already have access to a vacuum and an outlet, which might not necessarily be the case. For example, based on statistics from Electrolux, Koreans vacuum most frequently, while Brazilians and Portuguese people statistically spend the longest time vacuuming.

Similarly, different cultures have different standards for cleanliness and often keep different decor elements on their floors, which changes the augmented reality mappings in ways that we might not be able to anticipate. Our use case already limits most of these scenarios by specifying simplicity, but ultimately we still want to think about designing products for the practical world.

While our product’s design and use case do not directly deal with religious symbolism or iconography, we must be considerate of the fact that cleanliness has religious significance in certain cultures, so it is worth being mindful of that in any gamification features that we add, to ensure that we are not being insensitive.

Part C: Environmental Factors

Our project takes environmental factors into account, as we create only an additive product. All the components that we are building can be integrated into an existing vacuum design and will not produce a considerable amount of waste. We initially intended to create an active illumination module using LEDs, but we decided to forgo this idea, as creating a good, vision-safe illumination method would cost raw material: we would have to cycle through multiple iterations of the product, and the final solution we end with may not be as safe as an existing solution. As such, we settled for an already manufactured LED. We also recently discussed a method to mount the Jetson and its corresponding components to the vacuum module. One option that we are heavily exploring is a 3D printed solution. We can opt to use a recycled filament, as this would be more environmentally friendly compared to some of the other raw material sources. Moreover, our project as a whole aims to aid the user in getting a faster, better clean. It does not interfere with any other existing environmental factors in a negative way, and the charge needed to power our system is negligible compared to what a typical college student consumes on a daily basis.

Team Status Report for 2/24

We solved the previous issue of the Jetson not working and successfully managed to get a new one from inventory flashed with the OS and running. We performed dummy object detection experiments with particles on a napkin and observed a high false positive rate, which is a challenge that we are going to work on in the coming weeks. All three of us have successfully started onboarding with Swift. 

We changed our use case and technical requirements for cleanliness to measure the actual size of the dirt particles instead of the covered area, because the latter was too vague. We realized that 15% coverage of an area doesn’t have much meaning in context and instead wanted to measure meaningful dirt particles, specifically those that are >1 mm in diameter and within 10 cm of the camera. We have also created a new battery life requirement for the vacuum such that it must be active for over 4 hours, and have performed the accompanying mAh calculations. We updated our block diagrams and general design to include a form of wireless power with batteries that we plan on ordering in the coming week. In addition, we discovered that developing with Xcode without a developer account/license means we can only deploy with a cable plugged into our phone. While this is fine for the stage of development we are currently in, we need to purchase at least one developer license so that we can deploy wirelessly. This is the only adjustment that impacted our budget; we did not make any other changes to the costs our project would incur. We do not foresee many more use case/system adjustments of this degree.

Our timeline accounted for enough slack that the schedule has remained unchanged, but we definitely need to stay on track before spring break. We managed to find a functioning Jetson, which has allowed us to keep pace; this was our challenge from last week, because we did not know what the problem was or how long we would be blocked on the Jetson. Luckily this has been resolved, but we still need to acquire the Apple Developer license so that we can deploy to the phone wirelessly. This week, one of our main focus points will be the room mapping: we want to soon get a dummy app running with ARKit which can detect the edges of a room. Another one of our frontrunner tasks is to flesh out the rest of our design document.



Team Status Report for 2/17

We recently encountered an issue that could jeopardize the success of our project—our Jetson appears to not be able to receive any serial input. We are currently troubleshooting and trying to figure out a workable solution to this problem without replacing the unit entirely, although we have accounted for the slack time that we may end up needing in case we do need to reorder the component. We will make sure to seek a replacement component from the ECE inventory.

Our original plan did not include a Jetson; we had originally planned to use a Raspberry Pi component. Over this past week, we made the decision to deviate from our plan, and we opted for the Jetson instead. The Jetson is CUDA accelerated, and one of our group members (Harshul) has existing experience working with the Jetson. In addition, we have swapped out the active illumination LED with an existing piece of technology. The considerations for this were mostly based on the fact that the component that we chose was affordable, and that we would have to spend a significant amount of time designing our module so that it would be safe for the human eye. 

Our schedule has not changed, but the time it has taken to perform setup and acquire working hardware was longer than expected, which we accounted for in our slack time. Going forward it is important for us to remain on task, and we will do so by setting more micro-deadlines between the larger deadlines that Capstone requires. It is also essential that we work on getting ARKit floor mapping working in parallel, so that the hardware issues we face are not blockers that cause delays in the schedule.


Team Status Report for 2/10

Challenges and Risk Mitigation: 

  • Scheduling
    • Our schedules make it challenging to find times when we are all free. To mitigate this, we plan to set up an additional meeting outside of class at a time when we are all available, so that we have time to sync and collaborate.
  • Technical Expertise with Swift
    • None of us have direct programming experience with Swift, so there is a concern that this may impact development velocity. To mitigate this, we plan to start working on the software immediately to develop familiarity with the Swift language and Apple APIs.
  • Estimation of technical complexity
    • This space doesn’t have many existing projects and products within it, which makes it challenging to estimate the technical complexity of some of our core features. To mitigate this, we have defined our testing environment to minimize variability, using a smooth, monochrome floor to minimize challenges with noise and tracking, and we are leveraging existing tooling like ARKit to offload some of the technical complexity that is beyond the scope of our 14-week project.
  • Hardware Tooling
    • Active illumination is something that we haven’t worked with previously. To mitigate this, we plan on drawing on Dyson’s existing approach of an ‘eye safe green laser’ projected onto the floor, and, as an alternative, scaling back to a basic LED light or buying an off-the-shelf illumination device that illuminates the floor. We have also formulated our use-case requirements to focus on particles visible to the human eye, to reduce the dependency on active illumination for identifying dirt.
    • Professor Kim brought up the fact that our rear-facing camera may get dirty. We do not know how much this will impact our project or how likely we are to encounter issues with this piece of hardware until we start to prototype and test the actual module.

Changes to the existing system:

After iterating on our initial idea, we decided to decouple the computer vision and cleanliness detection component into a separate module to improve the separation of concerns in our system. This incurs the additional cost of purchasing hardware and some nominal compute to transmit this data to the phone, but it improves the feasibility of detecting cleanliness by separating this system from the phone’s LiDAR and camera, which are mounted with a much higher vertical footprint off the floor. With this module, we can place the sensor directly behind the vacuum head and have a more consistent observation point.

Scheduling

The schedule is currently unchanged; however, based on the bill of materials that we form and the timelines for ordering those parts, we are prepared to reorder tasks on our Gantt chart to mitigate downtime while waiting for hardware components.