Team Status Report for 4/6

Risks

One large risk our group is currently facing is that Erin is dealing with a number of issues with the Jetson. This is a major blocker for the entire end-to-end system: we cannot demonstrate whether the dirt tracking on the AR application is working properly while the Jetson subsystem is offline. Without dirt detection and the BLE connection functioning, the AR system does not have the data it needs to determine whether the flooring is clean or dirty, and we have no way of validating whether the transformation of 3D data points on the AR side is accurate. Moreover, Erin is investigating whether it is even possible to speed up the Bluetooth data transmission; currently, it seems that an async sleep call of around ten seconds is necessary to preserve functionality. This, along with the BLE data transmission limit, may force us to readjust our use-case requirements.

Nathalie and Harshul have been working on tracking the vacuum head in AR space and on getting the coordinate transformation correct. While the coordinate transformation has a dependency on the Jetson subsystem (as mentioned above), the vacuum head tracking does not, and we have made significant progress on that front.

Nathalie has also been working on mounting the phone to the physical vacuum. We purchased a phone stand rather than designing our own mounting system, which saved us time. However, the angle the stand can accommodate may not be enough for the iPhone to get a satisfactory read, and this is something we plan to test more extensively so that we can determine the best orientation and mounting process for the iPhone.

System Validation

Drawing from our design report and incorporating Professor Kim’s feedback, which outlined an end-to-end validation test he would like to see from our system, below is the test plan for formalizing and carrying out this test.

The goal is to have every subsystem operational and to test connectivity and integration between the subsystems.

Subsystems:

  1. Bluetooth (Jetson)
  2. Dirt Detection (Jetson)
  3. Plane Projection + Image Detection (ARKit)
  4. Plane Detection + UI
  5. Bluetooth (ARKit)
  6. Time:position queue (ARKit)

With an initial map of the room captured and the plane frozen, place the phone into the mount and start drawing to track the vacuum position. Verify that the image is detected, that the drawn line is behind the vacuum, and that the queue is being populated with time:position points. (Tests: 3, 4, 5, 6)
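As a reference for this test, here is a minimal sketch of how the time:position queue could be populated on the ARKit side, assuming an `ARSessionDelegate` that samples the camera pose each frame; the `TimedPosition` and `TrackingSession` names are illustrative rather than our actual class names.

```swift
import ARKit
import simd

/// Illustrative container for one time:position sample.
struct TimedPosition {
    let timestamp: TimeInterval   // ARFrame capture time
    let position: simd_float3     // camera position in world coordinates
}

final class TrackingSession: NSObject, ARSessionDelegate {
    // Simple FIFO of timestamped camera positions (placeholder name).
    private(set) var positionQueue: [TimedPosition] = []

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // The camera's world position is the translation column of its transform.
        let t = frame.camera.transform.columns.3
        positionQueue.append(TimedPosition(timestamp: frame.timestamp,
                                           position: simd_float3(t.x, t.y, t.z)))
    }
}
```

Verifying that `positionQueue` grows while the vacuum is moving (and that the timestamps are monotonically increasing) covers the queue portion of this test.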

Place a large, visible object behind the vacuum head, in view of the active illumination devices and the Jetson camera. Verify that the dirt detection script categorizes the image as “dirty”, that this message is sent from the Jetson to the iPhone over BLE, and that the iPhone receives the intended message. Then verify that the AR application highlights the proper portion of flooring (the region containing the large object). (Tests: 1, 2)
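For the BLE half of this test, a rough sketch of the iPhone-side check that the “dirty” message actually arrived is below. The service/characteristic UUIDs and payload format are placeholders rather than our real protocol; the point is simply to log receipt before handing the data to the AR layer for highlighting.

```swift
import CoreBluetooth

// Placeholder UUIDs -- substitute the actual service/characteristic exposed by the Jetson.
let dirtServiceUUID = CBUUID(string: "FFE0")
let dirtCharacteristicUUID = CBUUID(string: "FFE1")

final class DirtReceiver: NSObject, CBPeripheralDelegate {
    /// Called with the received payload so the AR layer can highlight the flagged region.
    var onDirtyMessage: ((Data) -> Void)?

    func peripheral(_ peripheral: CBPeripheral,
                    didUpdateValueFor characteristic: CBCharacteristic,
                    error: Error?) {
        guard error == nil,
              characteristic.uuid == dirtCharacteristicUUID,
              let payload = characteristic.value else { return }
        // Log receipt for the validation test, then pass the payload
        // (e.g. a "dirty" flag plus region info) on to the AR application.
        print("Received \(payload.count) bytes from Jetson at \(Date())")
        onDirtyMessage?(payload)
    }
}
```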

Schedule

Our schedule has been unexpectedly delayed by the Jetson malfunctions this week, which have hindered progress on the dirt detection front. Erin has been especially involved with this, and we are hoping to have it reliably resolved soon so that she can instead focus her energy on reducing the latency of the Bluetooth communication. Nathalie and Harshul have been making steady progress on the AR front, but it is absolutely crucial for each of our subsystems to have polished functionality so that we see more integration progress, especially with the hardware. We are mounting the Jetson this week(end) to measure the constant translational difference so that we can add it to our code and run accompanying tests to ensure maximal precision. A challenge has been our differing availability during the day, but since we are working on integration testing between subsystems, it is important that we all meet together and with the hardware components. To mitigate this, we have set aside chunks of time on our calendars allotted to specific integration tests.

Nathalie’s Status Report for 4/6

I spent this week combining multiple features of the augmented reality component, specifically freezing the plane and tracking coverage metrics. In addition, I did initial scoping, researching and solidifying approaches for object detection, where picking a reference object that sits at a fixed distance from the rear Jetson camera would allow us to locate the world coordinates of the Jetson camera at any given point. I also performed initial testing of our floor mapping technology so that I could plan how to verify and validate our augmented reality subsystem going forward, making sure that we are able to track coverage and integrate our individual functionalities without compromising on latency.

Floor mapping with tracking based on camera middle point and present objects in a non-rectangular space

Integrating with physical components

As I have been working on the augmented reality app, I am familiar with how it works, and I got the opportunity to experiment with the placement of our components because we finally received the phone mount that we ordered. I spent the early parts of this week experimenting with object detection in order to orient a specific object within world coordinates, and I needed some physical metrics to inform the approach I am going to take for object detection on the AR front. Essentially, Harshul’s and my goal (and what we are currently working on) is to detect a specific object in the space that serves as a reference point on the front of the vacuum, from which we can map the constant translational difference from the front of the vacuum to the back, where the Jetson camera sits.

Initially, I had expected to mount the phone on the actual rod of the vacuum. When I assembled the components and put the mount on the rod with the phone and AR app mapping the floor, I realized it was too close to the ground, so it would not provide the user-friendly aerial view we had initially envisioned. From initial validation tests, the floor mapping technology works best when it has the most perspective, since it can then interpret the space around it with context and meaning. Any rod position was also not ideal because of the vacuum compartment: Erin had been holding the vacuum for the dirt detection work, so I hadn’t realized just how big the compartment actually is. Another problem was that mounting the phone so low would put it out of the user’s field of view, which would render the whole subsystem useless.

While experimenting with positioning, I then moved the phone mount to the handle. I had initially hesitated to do this because I didn’t want the opening for the user’s hand to become too small to hold comfortably, but testing it out showed that this wasn’t an issue. In fact, it turned out to be the perfect visual aid: within the user’s field of vision without being too obstructive.

Ideal positioning of the phone mount on the vacuum. Compatible with multiple devices for maximal accessibility.
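To make the constant-translational-difference idea above concrete, here is a minimal sketch of how the Jetson camera’s world position could be derived once the reference object on the front of the vacuum is detected. The offset values are placeholders for the distances we will measure once the Jetson is mounted.

```swift
import simd

// Placeholder offset (in metres) from the reference object on the front of the
// vacuum to the Jetson camera at the back -- to be replaced with measured values.
let frontToJetsonOffset = simd_float3(0.0, -0.10, -0.45)

/// Given the world transform of the detected reference object, estimate the
/// world position of the Jetson camera by expressing the fixed offset in the
/// reference object's local frame and mapping it into world coordinates.
func jetsonWorldPosition(referenceTransform: simd_float4x4) -> simd_float3 {
    let localOffset = simd_float4(frontToJetsonOffset.x,
                                  frontToJetsonOffset.y,
                                  frontToJetsonOffset.z,
                                  1)
    let world = referenceTransform * localOffset
    return simd_float3(world.x, world.y, world.z)
}
```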

Object Detection

Handling the physical components helped me better understand the sorts of software challenges we might be dealing with. Due to the change in the height of the phone and mount, an unexpected challenge was that we would need to detect objects at a greater distance than initially thought. We are looking at what sort of object or shape is distinct from a height of ~1.2 m, seeking a compromise between a smaller object (for user friendliness and the least obstruction possible) and maintaining accuracy in our object detection models. In the video below, I detected a rectangle from a distance of 2 m (farther than we need), serving as a proof of concept that we can detect shapes and objects at the distances this project requires. When watching it, you can observe the changing colors of the rectangle on the floor, which is the AR application modifying/adding art to the rectangle it detects. Since the object is far away, you might need to look closely, but you can see the colors change, indicating that the application recognizes the rectangle.

[VIDEO] Demo example of object detection at a distance
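For reference, a condensed sketch of one way to detect a rectangle in an ARKit camera frame using Apple’s Vision framework is below; this mirrors what the demo above shows, but the thresholds are illustrative and the real tuning is still in progress.

```swift
import ARKit
import Vision

/// Look for a single rectangular reference shape in the current camera image.
func detectRectangle(in frame: ARFrame,
                     completion: @escaping (VNRectangleObservation?) -> Void) {
    let request = VNDetectRectanglesRequest { request, _ in
        completion((request.results as? [VNRectangleObservation])?.first)
    }
    request.minimumConfidence = 0.8   // reject weak candidates
    request.minimumAspectRatio = 0.3  // allow elongated reference shapes
    request.maximumObservations = 1   // we only track one reference object

    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                        orientation: .right,
                                        options: [:])
    // In practice, run this off the main thread; perform(_:) is synchronous.
    try? handler.perform([request])
}
```

The returned observation’s corner points are in normalized image coordinates, so they still need to be projected back onto the frozen floor plane to recover world coordinates.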

Verification + Validation

In terms of subsystem testing, Harshul and I worked on designing tests for the AR side of our project. We designed our tests together, but decided to each act as project manager for different pieces of the AR component. Specifically, I’m responsible for testing and validating the floor mapping classification, checking that the frozen map area matches the corresponding real-world area.

In a purely rectangular space, I am going to test the map’s ability to cover the real area, including the corners. Specifically, I am going to test the accuracy of the mapping while varying the degrees of freedom that the camera has access to. The use case requirement I am testing is:

Accuracy in initial mapping boundary detection: ±10 cm (per our design report and presentations)

Measurements & Calculations

What I’m measuring is the mapped area (the mesh area can be determined programmatically through ARKit) versus the area of the real floor, which I’ll measure manually with a width × height calculation. I’ll use the percentage difference between the two as a measure of accuracy, and perform calculations to make sure that the mapped area falls within ±10 cm of the outer borders. Specifically, I’m going to run 3 mapping tests and perform these calculations in each scenario. I will map the floor by pointing the camera for 5 seconds each in the following ways:

  1. Parallel to the floor
  2. Perpendicular to the floor with 90º horizontal rotation allowed
  3. Perpendicular to the floor with 180º horizontal rotation allowed

As such, I will have represented many of the user-mapped scenarios, performing the accompanying calculations for each and making sure that the result fits within our use case requirement. A sketch of the area comparison I plan to run is included below.
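This is a minimal sketch of that area comparison, assuming the frozen floor is available as an ARPlaneAnchor; the manually measured width and height come from the tape-measure numbers.

```swift
import ARKit
import simd

/// Sum the areas of the plane geometry's triangles to get the mapped area (in m^2).
func mappedArea(of anchor: ARPlaneAnchor) -> Float {
    let geometry = anchor.geometry
    let vertices = geometry.vertices
    let indices = geometry.triangleIndices
    var area: Float = 0
    for i in stride(from: 0, to: indices.count, by: 3) {
        let a = vertices[Int(indices[i])]
        let b = vertices[Int(indices[i + 1])]
        let c = vertices[Int(indices[i + 2])]
        // Triangle area = half the magnitude of the cross product of two edges.
        area += simd_length(simd_cross(b - a, c - a)) / 2
    }
    return area
}

/// Percentage difference between the mapped area and the manually measured floor area.
func percentDifference(mapped: Float, measuredWidth: Float, measuredHeight: Float) -> Float {
    let measured = measuredWidth * measuredHeight
    return abs(mapped - measured) / measured * 100
}
```

The ±10 cm border check is done separately, by comparing the mapped boundary against the measured wall positions for each of the three camera orientations.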

Schedule & Next Steps

At the beginning of this week, on the day of demos, our Jetson broke. Erin has been dealing with this issue extensively, and it has been an unexpected challenge that we have been trying hard to work through by reflashing the OS, starting everything over, and reinstalling absolutely everything; we have been having issues installing the initial packages due to all of the dependencies. The broken Jetson put us behind schedule because I was not able to test the latency of the Bluetooth communication from the Jetson, so that part got pushed to this week, which was unexpected but is our #1 priority going forward. While continuing to improve the object detection technology with Harshul, I will also be working on two main aspects of our tracking system: (1) making sure that we cannot trace outside of the plane (a rough sketch of one possible in-plane check is included below), and (2) enlarging the area of the drawn lines, whether by increasing the radius or by making them rectangular by adding planes instead. We don’t have a lot of slack time left, so we need to be constantly making progress to allow for proper integration tests and validation as we look toward our final demo.
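Here is the rough sketch of one possible in-plane check, assuming the frozen floor is still represented by an ARPlaneAnchor; for simplicity it clamps to the plane’s bounding rectangle rather than its exact boundary polygon.

```swift
import ARKit
import simd

/// Return true if a world-space point falls within the frozen plane's extent
/// (simplified to the plane anchor's bounding rectangle).
func isInsidePlane(_ worldPoint: simd_float3, plane: ARPlaneAnchor) -> Bool {
    // Transform the point into the plane anchor's local coordinate space.
    let p = simd_float4(worldPoint.x, worldPoint.y, worldPoint.z, 1)
    let local = simd_mul(plane.transform.inverse, p)
    // The plane's rectangle is centered at `center` with size `extent`.
    return abs(local.x - plane.center.x) <= plane.extent.x / 2 &&
           abs(local.z - plane.center.z) <= plane.extent.z / 2
}
```

Points that fail this check would simply not be added to the drawn path, which prevents tracing outside the frozen plane.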