Nathalie’s Status Report 2/24

This week I worked on fleshing out the technical requirements based on our initial use case requirements. I did research on the accuracy of ARKit and how it would impact our mapping and coverage use case requirements (resources listed below).

I changed the requirements for dirt detection to measure particle size instead of a percentage of the visible area, in order to accurately describe what kind of dirt we are detecting. The previous phrasing, dirt covering “15% of the visible area”, was not descriptive of the actual metric we wanted to quantify. From there (and using our experimental CV object detection tests), I decided that we should detect dirt particles larger than 1 mm within 10 cm of the camera. I also did calculations for our new use case requirement on battery life, specifically that our vacuum should be able to run for 4 hours without recharging. In terms of power, this translates to 16,000 mAh on 10 W mode and 8,000 mAh on 5 W mode, with the goal of supporting multiple vacuuming sessions on a single charge (a rough sizing sketch is below). Sources include:
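
As a sanity check on those capacity numbers, here is a minimal sizing sketch. The 5 V nominal supply voltage and the 2x safety margin are assumptions I am using to illustrate the arithmetic, not confirmed values from our design.

```swift
// Rough battery-sizing sketch. The 5 V nominal supply voltage and the 2x
// safety margin are illustrative assumptions, not confirmed design values.
func requiredCapacitymAh(powerWatts: Double,
                         hours: Double,
                         supplyVoltage: Double = 5.0,   // assumed nominal voltage
                         margin: Double = 2.0) -> Double { // assumed safety margin
    // capacity (mAh) = power (W) * time (h) / voltage (V) * 1000, scaled by the margin
    let energyWh = powerWatts * hours
    return energyWh / supplyVoltage * 1000 * margin
}

print(requiredCapacitymAh(powerWatts: 10, hours: 4)) // 16000.0 mAh
print(requiredCapacitymAh(powerWatts: 5, hours: 4))  // 8000.0 mAh
```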

I also made sure to connect Xcode to my phone so that I would be able to test my code locally.

Next steps include doing dummy floor mappings on static rooms and on pictures of rooms. I have no prior exposure to Swift, Xcode, or ARKit, so I need to become familiar with them before I can start tracking floor coverage and figuring out the limits of ARKit as a tool. I want to be able to test programs locally on my phone and experiment with the phone’s LiDAR sensor. I think I need to work through Swift and Xcode tutorials before starting development, which I anticipate taking a lot of time. Another task I need to focus on is preventing false positives in the current object detection algorithm, or developing a threshold good enough that slight creases on the surface are not counted as noise that affects the final result (a rough sketch of this kind of size filter is below).
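
To make the false-positive idea concrete, here is a minimal sketch of the kind of size-and-confidence filter we might apply to dirt candidates. The mm-per-pixel calibration constant and the struct fields are hypothetical placeholders, not part of our current pipeline.

```swift
import CoreGraphics

// Hypothetical post-filter on dirt candidates: discard detections whose
// estimated physical size falls below our 1 mm requirement, so surface
// creases and sensor noise are not counted. mmPerPixel would come from
// calibrating the camera at its ~10 cm working distance.
struct DirtCandidate {
    let boundingBox: CGRect   // in pixels
    let confidence: Double
}

func filterDirtCandidates(_ candidates: [DirtCandidate],
                          mmPerPixel: Double,           // assumed calibration constant
                          minSizeMM: Double = 1.0,
                          minConfidence: Double = 0.5) -> [DirtCandidate] {
    candidates.filter { candidate in
        let widthMM = Double(candidate.boundingBox.width) * mmPerPixel
        let heightMM = Double(candidate.boundingBox.height) * mmPerPixel
        return max(widthMM, heightMM) >= minSizeMM && candidate.confidence >= minConfidence
    }
}
```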

We are on track, but we need to make sure that we can articulate our ideas in the design report and figure out how the technologies work, because they are still relatively unfamiliar to us.

Harshul’s Status Report 2/24

This week I primarily worked on the design presentation we had to give: polishing the delivery with practice and ensuring that our technical requirements had a clear mapping to the use case definitions. On the presentation side, I worked on formalizing the system architecture by specifying the general block diagram, the hardware I/O diagram, and the software stack. These diagrams define the interactions between our sensor module and the iPhone app, which should lead to a clear interface. I filed an inventory request for a replacement Jetson, flashed the OS image to our SD card, configured the JetPack/Ubuntu distribution, and compiled and built the drivers for our Wi-Fi and Bluetooth dongles to allow wireless development over SSH rather than depending on Ethernet, which leaves it ready for benchmarking. I’ve started working on getting a demo of the AR mapping up and running in Xcode; I will continue working on it going into next week and use those learnings to inform our final design in the report.

Some next steps for the report entail fleshing out the architecture of the Xcode app itself and running the mockup Erin put together of the dirt detection algorithm on the video stream on the Jetson. That will test whether CUDA-accelerated OpenCV can successfully process the images on the device, which determines whether we can transmit a JSON message to the iPhone over Bluetooth or whether we need to transmit the image explicitly (a sketch of what such a JSON message might look like is below). I will also work on getting an AR mapping demo to build, test it on my iPhone, and then start experimenting with the ARKit API. Our last anticipated parts order is a battery and potentially a DC regulator so that we can power the Jetson without wall power.
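
For illustration, here is one possible shape for that JSON message, modeled with Swift’s Codable on the phone side. The field names and units are placeholders I made up; the real interface between the Jetson and the app is still undecided.

```swift
import Foundation

// Hypothetical dirt-detection message the Jetson could send instead of a
// full image frame. Field names and units are placeholders until the
// interface between the Jetson and the iPhone app is finalized.
struct DirtDetection: Codable {
    let x: Double          // offset from the camera frame, in meters
    let y: Double
    let sizeMM: Double     // estimated particle size in millimeters
    let confidence: Double
}

struct DirtMessage: Codable {
    let timestamp: TimeInterval
    let detections: [DirtDetection]
}

// A payload like this stays tiny compared to transmitting the image itself.
let message = DirtMessage(timestamp: Date().timeIntervalSince1970,
                          detections: [DirtDetection(x: 0.10, y: -0.05, sizeMM: 2.3, confidence: 0.9)])
let payload = try? JSONEncoder().encode(message)
```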

I think progress is generally on track. The two key things are ensuring that we have a report that clearly specifies our overall design and making progress on the AR front as a foundation to build on once we return from spring break.

Nathalie’s Status Report for 2/17

I spent this week outlining which hardware parts would be needed for each piece of functionality. We broke our hardware deliverables into two parts: the ARKit coverage mapping and the back-facing camera. The back-facing camera has many more hardware components, like the Jetson, a Jetson-compatible camera, and the LED light. I spent a lot of time figuring out which components were right for our project and doing research on actual specs, then placed the orders for our parts through the Purchase Forms. We were going to use a Raspberry Pi, but the one we wanted already had priority on the ECE inventory list, so we had to pick something else: the Jetson.

I also installed Xcode and set up my developer environment, which I thought would be quick but ran into some unexpected issues because Xcode requires a lot of disk space. I spent a few hours downloading a disk scanner so I could delete things from my computer, backing files up to a hard drive, and then downloading Xcode itself, which was extremely slow. In the meantime, I was able to do some initial research on ARKit and how other people have mapped rooms and floors in the past. I found a room plan tool that Apple has developed, which we may be able to leverage: we plan on the simpler task of mapping just the floor, but a 3D plan of the room is helpful for determining where the borders of the floor are. We can probably use this tool for our initial mapping of the room to develop our ‘memory’ feature. This floor plan anchor tool and its mesh object detection and mapping are similar to what we want to accomplish when mapping the floor, especially when it comes to detecting which objects we don’t want to include. There is an ARPlaneAnchor.Classification.floor case that represents a real-world floor, ground plane, or similar large horizontal surface.
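
As a first experiment with that classification, here is a minimal ARKit sketch, assuming we run world tracking with horizontal plane detection and only react to planes classified as the floor. This is a starting point, not our final mapping design.

```swift
import ARKit

// Minimal sketch: run world tracking with horizontal plane detection and
// keep only the plane anchors ARKit classifies as the floor.
final class FloorMapper: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal]
        session.delegate = self
        session.run(configuration)
    }

    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for anchor in anchors {
            guard let plane = anchor as? ARPlaneAnchor else { continue }
            switch plane.classification {
            case .floor:
                // plane.center and plane.extent describe the detected floor region;
                // this is where coverage geometry would start accumulating.
                print("Floor plane detected, extent:", plane.extent)
            default:
                break
            }
        }
    }
}
```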

The plan is currently still on schedule, but there isn’t much slack left for these next parts, so it’s really important that our subsequent tasks are done on time.

Next steps include making sure that our Jetson is able to boot (and reordering it if needed), plus taking static images of dirt on the floor and feeding them into a dirt detection model to fully flesh out our thresholds and make sure the object detection model can perform in our context. After that, we are going to use the Jetson to perform the object detection mapping.

Team Status Report for 2/17

We recently encountered an issue that could jeopardize the success of our project: our Jetson appears unable to receive any serial input. We are currently troubleshooting and trying to find a workable solution without replacing the unit entirely, although we have accounted for the slack time we may need in case we do have to reorder the component. We will seek a replacement component from the ECE inventory.

Our original plan did not include a Jetson; we had originally planned to use a Raspberry Pi. Over this past week, we made the decision to deviate from that plan and opted for the Jetson instead. The Jetson is CUDA accelerated, and one of our group members (Harshul) has existing experience working with it. In addition, we have swapped out the active-illumination LED for an existing piece of technology. The considerations here were mostly that the component we chose was affordable, and that we would otherwise have to spend a significant amount of time designing our own module to be safe for the human eye.

Our schedule has not changed, but setup and acquiring working hardware took longer than expected, which we accounted for in our slack time. Going forward, it is important for us to remain on task, and we will do so by setting more micro-deadlines between the larger deadlines that Capstone requires. It is also essential that we work on ARKit floor mapping in parallel, so that the hardware issues we face are not blockers that delay the schedule.

Erin’s Status Report for 2/17

This week I mainly worked on getting started with dirt detection and figuring out what materials to order. Initially, our group had wanted to use an LED for active illumination and then run our dirt detection module on the resulting illuminated imagery, but I found some existing technology, which we ended up purchasing, that takes care of this for us. The issue with the original design was that we had to be careful about whether the LEDs we were using were vision safe, which raised ethics questions as well as design facets that my group and I do not know enough about. Moreover, I have started working a little with Swift and Xcode. I watched a few Swift tutorials over the week and toyed around with some of the syntax in my local development environment. I have also started doing research on model selection for our dirt detection problem. This is an incredibly crucial component of our end-to-end system, as it plays a large part in how easily we can achieve our use case goals. I have looked into the Apple Live Capture feature, as this is one route I am exploring for dirt detection. The benefit is that it is native to Apple Vision, so I should have no issue integrating it into our existing system. However, the downside is that this model is typically used for object detection rather than dirt detection, and the granularity we need may be too fine for the model to work. Another option I am currently considering is the DeepLabV3 model, which specializes in segmenting the pixels of an image into different objects. For our use case, we just need to differentiate between the floor and essentially anything that is not the floor. If we can detect small particles as objects distinct from the floor, we could move forward with some simple casing on the size of those objects for our dirt detection (a rough sketch of that route is below). The aim is to experiment with all of these models over the next couple of days and settle on a model of choice by the end of the week.
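
To illustrate the DeepLabV3 route, here is a minimal Vision + Core ML sketch. It assumes Apple’s published DeepLabV3 Core ML model has been added to the project (the auto-generated `DeepLabV3` class name is an assumption), and the size-based casing on the returned segmentation map is left as a placeholder.

```swift
import Vision
import CoreML
import CoreGraphics

// Sketch of the segmentation route: run DeepLabV3 on a frame and hand back
// the per-pixel class map, from which non-floor clusters could be filtered
// by size. Assumes Apple's published DeepLabV3 .mlmodel is in the project;
// the generated `DeepLabV3` class name is an assumption.
func segmentFrame(_ image: CGImage, completion: @escaping (MLMultiArray?) -> Void) {
    guard let visionModel = try? VNCoreMLModel(
        for: DeepLabV3(configuration: MLModelConfiguration()).model
    ) else {
        completion(nil)
        return
    }
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // The result is a multi-array of class labels, one per pixel.
        let observation = request.results?.first as? VNCoreMLFeatureValueObservation
        completion(observation?.featureValue.multiArrayValue)
    }
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try? handler.perform([request])
}
```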

We are mostly on schedule, but we ran into some technical difficulties with the Jetson. We initially did not plan on using a Jetson; rather, this was a quick change of plans when we were choosing hardware, as the department was low in stock on Raspberry Pis and high in stock on Jetsons. The Jetson also has CUDA acceleration, which is good for our use case, since we are working on a processing-intensive project. Our issue with the Jetson may cause delays on the end-to-end integration side and is stopping Harshul from performing certain initial tests he wanted to run, but I am able to experiment with my modules independently of the Jetson, so I am currently not blocked. In addition, since we have replaced the active illumination with an existing product component, we are ahead of schedule on that front!

In the next week, I (again) hope to have a model selected for dirt detection. I also plan to help my groupmates write the design document, which I foresee being a pretty time-consuming part of my tasks for next week.

Harshul’s Status Report 2/17

This week we worked to select and order parts and to set up a developer environment to start prototyping our project’s components. On the hardware selection side, drawing on my experience with a Jetson in another capstone, we opted to work with a Jetson instead of a Raspberry Pi to leverage the CUDA-accelerated computer vision features of the Nano. I set up the Xcode + RealityKit SDK and configured my iPhone for development mode so I could flash an AR demo app onto it. I spent some time attempting to flash the Jetson with an OS image and get it to boot up, but unfortunately this wasn’t working. After troubleshooting multiple SD cards, different JetPack OS images, and power cables, the final remaining item to troubleshoot is powering the Jetson directly with jumper cables from a DC power supply. I tried to boot the Jetson in headless mode over serial, but it did not create a tty or cu device entry, so I could not actually access it over serial. While the Jetson was unable to boot, I did take the time to test the camera and Wi-Fi adapter peripherals on a separate working Jetson to verify that those components work. After setting up my Xcode environment, I’ve taken some time to research the build process and the APIs and conventions of the Swift language to better prepare myself for developing in Xcode.

Our schedule is on track, but the dirt detection deadlines that depend on the Jetson are approaching soon, so that will become the highest-priority action item once the design presentation is complete and submitted.

Next week’s deliverables involve getting a working Jetson up and running, either by fixing the power solution or by trying a replacement from ECE inventory to see if it’s a hardware fault; working with RealityKit in Swift to create a demo app that can build a world map (a minimal starting point is sketched below); and ticking off the dirt detection prototype on the Jetson.
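
For the RealityKit demo, something like the following minimal ARView setup could be the starting point, assuming we just want world tracking with plane detection and debug overlays before adding any coverage logic.

```swift
import UIKit
import RealityKit
import ARKit

// Minimal RealityKit starting point: an ARView running world tracking with
// horizontal plane detection and debug overlays, before any coverage logic.
final class ARDemoViewController: UIViewController {
    private let arView = ARView(frame: .zero)

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.frame = view.bounds
        view.addSubview(arView)

        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal]
        arView.debugOptions = [.showWorldOrigin, .showAnchorOrigins]
        arView.session.run(configuration)
    }
}
```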

Team Status Report 2/10

Challenges and Risk Mitigation: 

  • Scheduling
    • Our schedules make it challenging to find times when we are all free. To mitigate this, we are planning to set up an additional meeting outside of class at a time when we are all available, so that we have time to sync and collaborate.
  • Technical Expertise with Swift
    • None of us have direct programming experience with Swift, so there is a concern that this may impact development velocity. To mitigate this, we plan to start working on the software immediately to build familiarity with the Swift language and Apple APIs.
  • Estimation of technical complexity
    • This space doesn’t have many existing projects and products in it, which makes it challenging to estimate the technical complexity of some of our core features. To mitigate this, we have defined our testing environment to minimize variability: a monochrome floor with a smooth surface to minimize challenges with noise and tracking, along with existing tooling like ARKit to offload some of the technical complexity that is beyond the scope of our 14-week project.
  • Hardware Tooling
    • Active illumination is something we haven’t worked with previously. To mitigate this, we plan on drawing on Dyson’s existing approach of an ‘eye-safe green laser’ projected onto the floor, and on scaling back to a basic LED light or buying an off-the-shelf illumination device that lights the floor as an alternative. We have also formulated our use case requirements to focus on particles visible to the human eye to reduce the dependency on active illumination for identifying dirt.
    • Professor Kim brought up the fact that our rear-facing camera may get dirty. We do not know how much this will impact our project or how likely we are to encounter issues with this piece of hardware until we start to prototype and test the actual module.

Changes to the existing system:

While iterating on our initial idea, we decided to decouple the computer vision and cleanliness detection component into a separate module to improve the separation of concerns in our system. This incurs the additional cost of purchasing hardware and some nominal compute to transmit the data to the phone, but it improves the feasibility of detecting cleanliness by separating this system from the LiDAR and camera of the phone, which is mounted much higher off the floor. By having this module we can place it directly behind the vacuum head and have a more consistent observation point.

Scheduling

The schedule is currently unchanged; however, based on the bill of materials we put together and the timelines for ordering those parts, we are prepared to reorder tasks on our Gantt chart to mitigate downtime while waiting for hardware components.

Erin’s Status Report for 2/10

This week I worked primarily on the proposal presentation that I gave in class. I spent time practicing and thinking about how to structure the presentation to make it more palatable. Additionally, prior to my Monday presentation, I spent a good amount of time trying to justify the error values that we were considering for our use case requirements. I also worked on narrowing the scope of our project, trying to calibrate the amount of work our group would have to do to something I believe is achievable during the semester. Initially, we had wanted our project to work on all types of flooring, but I realized that the amount of noise picked up on surfaces such as carpet or wood might make our project too difficult. I also spent some time looking at existing vacuum products so that our use case requirements would make sense relative to the products currently on the market. I worked on devising a Gantt chart (shown below) for our group as well. The Gantt chart was aligned with the schedule of tasks that Nathalie created, and it showed the planned progression of our project through the course of the fourteen-week semester. Finally, I looked into some of the existing computer vision libraries and read up on ARKit and RealityKit to familiarize myself with the technologies that our group will be working with in the near future.

Our group is on track with respect to our Gantt chart, although I would like us to stay slightly ahead of schedule. We plan on syncing soon and meeting up to figure out exactly what hardware we need to order.

Within the next week, I hope to have better answers to all the questions that were raised in response to our initial presentation. Furthermore, I hope to make headway on dirt detection, as that is the next planned task that I am running point on. This starts with getting the materials ordered, figuring out how our component may fit on a vacuum, and brainstorming a backup plan in case our initial plan for the dirt detection LED falls through.

Harshul’s Status Report 2/10

This week we worked on the proposal presentation to flesh out our idea, define our scope, assess feasibility, and identify technical challenges. I worked on the Technical Challenges and Solution Approach sections of our presentation. This entailed translating our use case requirements and feature set into the technologies and tools we would be using, as well as coming up with a high-level architecture of how the subsystems of our device fit together.

The key challenges identified were:

  • Software Based Challenges
    • Object detection to create an accurate map of the floor ‘plane’
    • Erasing the map as we traverse the cleaning area
    • Computer vision component to detect particulates
  • Hardware based challenges
    • Combining AR data collected by the phone with the information from our cleanliness computer vision module 
    • Identifying how our hardware would communicate with the iPhone mounted on the vacuum

With these challenges in mind, I researched the features and capabilities of ARKit, RealityKit, and CoreBluetooth to identify how these technologies can solve these challenges and assist us in building our application, and I represented how they fit together in the diagram below (a minimal sketch of the Bluetooth link on the phone side follows). We also spent time as a team justifying why leveraging Apple’s existing AR toolchain, rather than rolling our own implementation with ROS and Unity, is more aligned with our non-functional requirements of form factor and ease of use: Apple’s tools are vertically integrated, whereas ROS, Unity, etc. provide interoperable but completely separate libraries for each of these purposes, and using the LiDAR embedded in our phones keeps costs down compared to purchasing separate LiDAR sensors.
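
To show what the CoreBluetooth piece could look like on the iPhone side, here is a minimal sketch of a central that scans for and connects to our sensor module. The service UUID is a placeholder; our module’s actual GATT layout has not been defined yet.

```swift
import CoreBluetooth

// Minimal CoreBluetooth sketch of the phone side: scan for the sensor
// module and connect to it. The service UUID is a placeholder; the real
// GATT layout of our module has not been defined yet.
final class SensorModuleLink: NSObject, CBCentralManagerDelegate {
    private let dirtServiceUUID = CBUUID(string: "FFE0")   // placeholder UUID
    private var central: CBCentralManager!
    private var sensorPeripheral: CBPeripheral?

    override init() {
        super.init()
        central = CBCentralManager(delegate: self, queue: nil)
    }

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        guard central.state == .poweredOn else { return }
        central.scanForPeripherals(withServices: [dirtServiceUUID], options: nil)
    }

    func centralManager(_ central: CBCentralManager,
                        didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any],
                        rssi RSSI: NSNumber) {
        sensorPeripheral = peripheral      // keep a strong reference
        central.stopScan()
        central.connect(peripheral, options: nil)
    }

    func centralManager(_ central: CBCentralManager, didConnect peripheral: CBPeripheral) {
        // Once connected, discover the dirt-detection service and subscribe
        // to its characteristic to receive JSON messages from the module.
        peripheral.discoverServices([dirtServiceUUID])
    }
}
```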

This progress is on track with the Gantt chart, and next steps involve meeting as a team to order materials and spinning up an Xcode environment to start iterating and experimenting with ARKit. This experimentation will familiarize us with our technology stack and help us create a more detailed system and technical design.

Nathalie’s Status Report for 2/10

This week I outlined our use case requirements into three main categories: mapping coverage, cleanliness, and tracking coverage. In order to quantify these metrics, I found an article entitled “A study on the use of ARKit to extract and geo-reference floor plans,” which helped inform the quantified numbers. More specifically, this study found ARKit deviations of ~10 cm in small-scale environments and ~2 m of error in large-scale environments. The rooms we clean are small to medium, roughly 5 m x 4 m, so the expected error is ~30-40 cm, which rounds up to about 10% error. The article can be found here. As a team, we spent a lot of time discussing how to justify our requirements and making sure we were aligned on our idea and what we were going to use to validate it. I created a schedule of tasks (a sample of items is shown in the screenshot below), detailing tasks, subtasks, duration, notes, and assignees. Our schedule is meant as a place to store notes and information, and it helped in the creation of our Gantt chart.

I also created a testing plan that aligns with the use case requirements and the schedule of tasks. I made sure to show which of these tasks and tests will be done in parallel and which sequentially, based on our established timeline. These tests include both unit tests for each part of our system and end-to-end tests for the whole system once we begin to integrate parts.

My progress so far is on schedule relative to the deadlines that we outlined during our proposal presentation. Based on feedback from professors and TAs, my next steps are ordering materials, outlining more concretely what our technical plan is going to be on the software and hardware side, and fleshing out our thresholds for dirt.