Team Status Report for 04/27

Schedule

Now that we are in the final stretch, we need to test the full system and continue working on our remaining tasks: drawing accurately sized rectangular red segments and fine-tuning the frequency of our data sending/receiving now that we are no longer limited by it. We are also working to improve the UI so that it is instructive and intuitive on demo day. Beyond the technical improvements we need to make to our project, we need to start working on our presentation materials, which include the final video, the poster, and the written report.

Risks & Challenges

We currently have a little work left to accurately reflect the dirty and clean space on the AR side. The bounding boxes do not perfectly align yet, so we are still tuning the values within our AR app to ensure that the camera view matches the response in the iPhone app. Erin and Harshul are working to tune these parameters, and Nathalie is working on final touches for the UI. We still have a couple of final tests to run, and we need to mount the active illumination to the system. However, we don’t anticipate any of these tasks being blockers.

Validation: 

  1. Line Tracking Test
    Using masking tape, create a straight one-meter-long section and a zig-zag section of four 90° bends, each half a meter long. Using four 8.5″x11″ US Letter papers, print out a curved line section of the same width as the masking tape, and assemble the sheets to create a contiguous line. Then drive the vacuum over the line and verify that the drawn line tracks the reference lines with smooth connectivity and no perpendicular deviation of more than 5cm.
    Results: Verified that we are able to track a range of motions from straight lines to sharp curves. Instead of mocking this on paper, we used TechSpark’s robot tracking tape path, which had a similar geometry to what was outlined in the test.
  2. Technical Requirements Tracking Test
    In our design report, we outlined a 5cm radius for the vacuum head’s suction. Upon setting that radius for the SCNLine, we should see that the lines drawn encompass the length of the vacuum head. We also outlined a 0.5s latency requirement for the tracking motion. To test this, we move in a straight line for two meters and verify that upon stopping, the entire motion is captured on the AR application within 0.5 seconds and accurately reflects the movement.
    Results: Tracking is in real time and the line drawn is continuous with no stuttering. This test informed our decision to move to SCNLine rather than our previous naive implementation of segmented cylinders.
  3. Plane Projection Verification
    We will draw an offset at 5cm plus the radius of the SCNLine from the floor plane. Then we will place the phone such that the camera is touching the floor and positioned perpendicular to it. Upon visual inspection, none of the drawn shape should touch or extend past this test plane.
    Results: Verified that this projection method works. It is reliable even after changing from a hit-test to a plane projection.
  4. BLE Connection:
    Try to send dirty/clean messages thirty times and record the delay between each message. Consider the summary statistics of the collected data; assuming the standard deviation is not large, we can focus on the mean value. We hope for the mean to be around 0.1s, which gives us fps=10. This may not be achievable given the constraints of BLE, but with these tests, we will be able to determine what our system can achieve (a sketch of the measurement script appears after this list).
    Results: We ran tests and collected over two hundred different sample latency values. We have achieved the desired 0.1s latency. The bottleneck has now become the camera’s frame rate, which is 30fps.
  5. Dirt Detection
    Take ten photos each of: 1) clean floor, 2) unclean floor with sparse dirt, and 3) unclean floor with heavy dirt. Assert that the dirt detection module classifies all ten images of the clean floor as “clean”, and all twenty images of the unclean floor as “dirty”. If this is not met, we will need to further tune the thresholds of the dirt detection algorithm (a sketch of this check appears after this list).
    Results: After running our script on two different algorithms, we achieved the following results. Based on these values, we have chosen the second script. 

            Algorithm 1    Algorithm 2
    FPV     81.82%         0.00%
    FNV     0.00%          9.52%
    ACC     71.88%         93.75%
  6. Camera Bounding Box Test
    Set up the vacuum system on a table in TechSpark and create little red rods to delimit the borders of the camera’s field of view. By accurately determining these boundaries, we refined the cropping and granularity of our image capture.
    Results: We have now determined the correct parameters to use when cropping the camera field of view in our dirt detection algorithm.
  7. Jetson Mounting Tests
    Note that these tests were very prototype focused. We created cardboard cutouts with a knife to identify shape and stiffness requirements. We then cut the pieces out of both acrylic and plywood and tested to make sure that there was enough give in the components for them to still function modularly. We tested mounting our components with tape, hot glue, and epoxy. The tape was flimsy and left our main mount too vulnerable to jostling, so we opted for hot glue and/or epoxy for mounting fixed components. The initial testing we did with the first mount iterations found that the wheels jostled the mount, so we fabricated standoffs to account for this.
    Results: Camera configuration cropping and resolution was initially too narrow. We implemented design changes and updated the camera configuration to capture a bounding box more appropriately sized with the width of the vacuum.
  8. Camera Mounting
    Tested various camera angle configurations against the amount of dirt captured to identify optimal mounting orientation.
    Results: We used this test to select our final mounting orientation for the Jetson camera.

  9. Battery Life Test
    We start the Jetson from a full charge and run the entire end-to-end system for five minutes. After the five minutes are up, we check the charge on the Jetson. A charge depletion of less than 2.08% over that window would indicate success.
    Results: Over a span of thirty minutes, the power bank battery was only depleted around 10% (across four independent trials). This satisfies our requirements for battery life. 
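
For test 4 (BLE Connection), the delay measurement boils down to timing each send and summarizing the results. Below is a minimal sketch of that script; the `send_message` argument is a placeholder for however the actual BLE transmission is invoked, not our real implementation.

```python
import statistics
import time

def measure_ble_latency(send_message, n_trials=30):
    """Time n_trials dirty/clean transmissions and summarize the delays.

    send_message is assumed to be a blocking call that performs one BLE send
    (a stand-in for the real transmission function).
    """
    delays = []
    for _ in range(n_trials):
        start = time.monotonic()
        send_message()                      # one dirty/clean BLE message
        delays.append(time.monotonic() - start)

    mean = statistics.mean(delays)
    stdev = statistics.stdev(delays)
    print(f"mean delay: {mean:.3f} s (~{1.0 / mean:.1f} messages/s)")
    print(f"std dev:    {stdev:.3f} s")
    print(f"min / max:  {min(delays):.3f} s / {max(delays):.3f} s")
    return mean, stdev
```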
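For test 5 (Dirt Detection), the pass/fail criterion is a counting assertion over the labeled photo sets. The sketch below assumes a hypothetical `classify(path)` helper that wraps the dirt detection module and returns "clean" or "dirty"; the directory names are placeholders as well.

```python
from pathlib import Path

# Hypothetical labeled photo sets: ten images per directory.
PHOTO_SETS = {
    "clean":       Path("photos/clean_floor"),
    "sparse_dirt": Path("photos/unclean_sparse"),
    "heavy_dirt":  Path("photos/unclean_heavy"),
}

def run_dirt_validation(classify):
    """classify(path) -> "clean" or "dirty"; wraps the dirt detection module."""
    labels = {name: [classify(p) for p in sorted(d.glob("*.jpg"))]
              for name, d in PHOTO_SETS.items()}

    # All ten clean-floor images must come back "clean", and all twenty
    # unclean-floor images (sparse + heavy) must come back "dirty".
    clean_ok = all(label == "clean" for label in labels["clean"])
    dirty_ok = all(label == "dirty"
                   for name in ("sparse_dirt", "heavy_dirt")
                   for label in labels[name])

    if clean_ok and dirty_ok:
        print("Thresholds pass the validation test.")
    else:
        print("Validation failed; thresholds need further tuning.")
    return clean_ok and dirty_ok
```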

Erin’s Status Report for 04/27

This past week I have been working on the final steps of integration for the dirt detection subsystem. This included both speeding up the transmission interval between BLE messages, and fixing the way the data was represented on the AR application once the dirt detection data was successfully transmitted from the Jetson to the iPhone.
The BLE transmission was initially incredibly slow, but last week I managed to increase the transmission rate significantly. By this time last week, we had messages sending roughly every 5.5 seconds, and I had conducted a series of tests to determine the statistical average of this delay. Our group knew, however, that the capability of BLE transmission far exceeded the results we were getting. This week, I timed the different components within the BLE script. The script which runs on the Jetson can be broken down into two parts: 1) the dirt detection component, and 2) the actual message serialization and transmission. The dirt detection component was tricky to integrate into the BLE script because each script relies on a different Python version. Since the dependencies for these scripts did not match (and I was unable to resolve these dependency issues after two weeks of research and testing), I had resorted to running one script as a subprocess within the other. After timing the subcomponents within the overall script, I found that the dirt detection was the component causing the longest delay; sending the data over BLE to the iPhone took just over a millisecond. I continued by timing each separate component within the dirt detection script. At first glance there was no issue, as the script ran quickly once started from the command line; the real culprit was the delay incurred in opening the camera each time the script was launched. I tried to mitigate this by returning an object from the script to the outer process calling it, but this did not make sense, as the data could only be read as serial data and the outer process’s dependencies could not handle an object of that type. Harshul came up with an incredibly clever solution: he proposed piping input in over the command line instead. Since Python’s subprocess module effectively takes in command line arguments and executes them, we pipe in a newline character each time we want to query another image from the script, so the camera only has to be opened once. This took very little refactoring on my end, and we have now sped up the script to send images as fast as we would need; a sketch of this pattern follows below. The bottleneck is now the frame rate of the CSI camera, which is only 30FPS, but our script can (in theory) handle sending around 250 messages per second.
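
To make the piping pattern above concrete, here is a minimal sketch of the idea, assuming a hypothetical `dirt_detect.py` that opens the camera once and then prints one result line per newline it reads on stdin; the script name, interpreter path, and output format are illustrative, not our exact code.

```python
import subprocess

# Launch the dirt detection script once under its own Python version, so the
# camera is only opened a single time (interpreter path is illustrative).
proc = subprocess.Popen(
    ["python3.6", "dirt_detect.py"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
    bufsize=1,                # line-buffered pipes
)

def query_dirt_detection():
    """Ask the long-running subprocess for one more classification."""
    proc.stdin.write("\n")    # a bare newline means "capture and classify one frame"
    proc.stdin.flush()
    return proc.stdout.readline().strip()   # e.g. "1714252310.512,1"
```

The key point of this design is that the expensive camera initialization happens once, outside the per-message loop, while the two conflicting Python environments never have to share anything richer than newline-delimited text.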
Something else I worked on this past week was allowing the dirt detection data to be successfully rendered on the user’s side. Nathalie created a basic list data structure which stores timestamps along with world coordinates. I added logic which sequentially iterates through this list, checking whether the timestamp received from the Jetson matches a timestamp within the list, and then displaying the respective colors on the screen depending on what the Jetson communicated to the iPhone. This iterative search is also destructive: elements are popped off the front of the list, as in a queue. This works because both the timestamps in the queue and the timestamps received from the Jetson are monotonically increasing, so we never have to worry about matching a timestamp with something from the past (a sketch of this matching logic follows below). In the state I left the system in, we were able to draw small segments based on the Jetson’s data, but Harshul and I are still working together to make sure that the area displayed on the AR application correctly reflects the camera’s view. As a group, we have conducted experiments to find the correct transformation matrix for this situation, and it now needs to be integrated. Harshul has already written some logic for this, and I simply need to tie his logic to the algorithm I have been using. I do not expect this to take very long.
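
The destructive matching described above works roughly as follows. This is a sketch in Python for readability; the real logic lives in the Swift AR code, and the tolerance window shown here is an assumption.

```python
from collections import deque

# Queue of (timestamp, world_coordinate) pairs recorded by the AR app,
# stored in increasing timestamp order.
pending = deque()

TOLERANCE = 0.05  # seconds; assumed matching window

def handle_jetson_message(jetson_ts, is_dirty, draw_segment):
    """Match one Jetson reading to the oldest compatible AR sample, destructively."""
    while pending:
        ts, coord = pending[0]
        if ts < jetson_ts - TOLERANCE:
            pending.popleft()        # too old; it can never match a future message
        elif ts <= jetson_ts + TOLERANCE:
            pending.popleft()        # match found: consume it and color the segment
            draw_segment(coord, is_dirty)   # e.g. red when dirty (color choice illustrative)
            return
        else:
            return                   # Jetson message predates everything still queued
```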
I have also updated the timestamps on both the Jetson’s side and the iPhone’s side to be interpreted as the data type Double, which lets us achieve much finer granularity and send a higher volume of data. I have reduced the transmission interval to 0.1s (10 messages per second), an incredible improvement over one message every 5.5 seconds. If we wish to increase the rate further, the refactoring process would be very short. Again, the bottleneck is now the camera’s frame rate, rather than any of the software/scripts we are running.
Earlier in the week, I spent some time with Nathalie mounting the camera and the Jetson. I mounted the camera to the vacuum at the angle which I tested for, and Nathalie helped secure the rest of the system. Harshul designed and cut out the actual hardware components, and Nathalie and I worked together to mount the Jetson’s system to the vacuum. Harshul handled the image tracking side of things. We now only need to mount the active illumination in the following week.
One consideration I had thought of in order to reduce network traffic was to simply not send any Bluetooth message from the Jetson to the iPhone when an area was clean. This at first seemed like a good idea, but I ended up scrapping it. Consider the case where a location was initially flagged as dirty. If a user runs the vacuum back over this same location and cleans it, they should see that the floor is now clean. Without clean messages, the user would never know whether their floor had actually been cleaned.
As for next steps, we have just a little more to do for mounting, and the bulk of the next week will presumably be documentation and testing. I do not think we have any large blockers left, and I feel like our group is in much better shape for the final demo.

Team Status Report for 4/20

Risks & Challenges

Our efforts are directed towards sketching out the hardware, which involves laser cutting acrylic for mounts. This is a potential risk because the mounting hardware may not act as intended, in which case we would need to come up with a new hardware approach with different materials, such as 3D printing a mount and then attaching it with Gorilla Glue. Luckily, the hardware components are not extensive, because we only need to mount a small camera and two lights. Nathalie’s research led to consideration of SCNPath for indicating the covered area. However, it poses challenges in retroactively modifying attributes like color based on real-time data and in keeping lines in memory the way SCNLine does. She has been working on path coloration logic, but this is dependent on the process of integrating it with Bluetooth message reception. Issues with Bluetooth include its reliability and speed. Erin has been working on attempting to speed up the BLE data transmission. Unfortunately, it may be bottlenecked, and we are unsure whether we can get any additional speedup beyond what our current system offers. Our biggest challenge is time, with the final demo looming, so effective prioritization of tasks is critical.

Schedule

Thankfully, we are back on track this week after figuring out all the issues with the BLE transmission. We have mounted the AR detection image, as well as the Jetson camera. The Jetson compute unit has yet to be fully attached to the system, as we are keeping it independent in case we need to continue development. The phone mount has also been attached to the vacuum system. We are currently working on the final stages of testing and verification, as well as the final presentation, demo, and poster.

As you’ve designed, implemented and debugged your project, what new tools or new knowledge did you find it necessary to learn to be able to accomplish these tasks? What learning strategies did you use to acquire this new knowledge?

Tools used: laser cutting, material management, setting objectives for each meeting

Learning strategies: constant communication, goal setting for each meeting so that we knew what needed to be accomplished in the work sessions, selecting a Plan B/C alternative in case our original plan doesn’t work

We’ve encountered various challenges that required us to learn new tools and knowledge to overcome them. Most of our ECE experience has been with software, so we had to learn and coordinate designing mounts, laser cutting them out of acrylic and wood. We also learned how to navigate Apple’s ARKit documentation and how to work with the official documentation/implementations alongside the developer community’s add-on solutions. SCNPath and SCNLine were created by a freelance developer (maxfrazer) before he began working for Apple.

To acquire these new skills and technical knowledge, we relied on constant communication and goal setting for our meetings. We made sure to discuss our progress and challenges regularly, setting clear objectives for each meeting to ensure that we were aligned on what needed to be accomplished. That way, we could prioritize the tasks related to our system integration, and then allow feature improvements to occur asynchronously on our own time.

Erin’s Status Report for 4/20

Over this past week, I have been working on the Bluetooth transmission system between the Jetson and the iPhone, as well as mounting the entire dirt detection system. I have also been working on tuning the dirt detection algorithm, as the algorithm I previously selected operated solely on static images. When working with the entire end-to-end system, the inputs to the Jetson camera are different from the inputs I used to test my algorithms. This is because the Jetson camera is limited in terms of photo quality, and it is not able to capture certain nuances in the images when the vacuum is in motion, whereas these details may have been picked up when the system was at rest. Moreover, the Bluetooth transmission system is now fully functional. I obtained roughly an 800% speedup; I previously was working with a 45-second average delay between messages. After calibrating the Jetson’s peripheral script, I was able to reduce the delay between messages to just 5.5 seconds (on average). This is still not as fast as we had initially hoped, but I recognize that running the dirt detection script will incur a delay, and it may not be possible to shrink this window much further. I am still looking to see if there are any remaining ways to speed up this part of the system, but this is no longer the main area of concern I am working on.
Additionally, prior to this past week, the BLE system would often cause segmentation faults on the Jetson’s side when terminating the script. This was a rather pressing issue, as recurring segmentation faults would cause the Bluetooth system to require reboots at a regular rate. This is not maintainable, nor is it desired behavior for our system. I was able to mitigate this issue by refactoring the iPhone’s Bluetooth connection script. Prior to this week, I had not worked too extensively with the Swift side of things; I was focused more on the Python scripting and the Jetson. This week, I helped Nathalie with her portion of the project, as she had heavy involvement with the data which the Jetson was transmitting to the iPhone. After a short onboarding session with my group, I was able to make the cleanliness data visible to the AR mapping functions, rather than only within the scope of the BLE connection. Nathalie now has access to all the data which the Jetson sends to the user, and will be able to fully test and tune the parameters for the mapping. We no longer have any need for backup Bluetooth options.
Beyond the software, I worked closely with my group to mount the hardware onto the vacuum system. Since I was the one who performed the tests which determined the optimal angle at which to mount the camera, I was the one who ultimately put the system together. Currently, I have fully mounted the Jetson camera to the vacuum. We will still need to affix the active illumination system alongside the Jetson camera, which I am not too worried about. The new battery pack we purchased for the Jetson has arrived, and once we finish installing WiFi capabilities onto the Jetson, we will be able to attach both the battery system and the Jetson itself to the vacuum system.
I have since started running some tests on the end-to-end system; I am currently trying to figure out the bounding boxes of the Jetson camera. Since the camera lens produces a bit of a “fisheye” effect, the image that we retrieve from the CSI camera may need to be slightly cropped. This is easily configurable, and will simply require more testing.
Over the next week, I intend to wrap up some testing with the dirt detection system. The computer vision algorithm may need a bit more tuning, as the context in which we run the system can significantly change the performance of the algorithm. Moreover, I hope to get WiFi installed on the Jetson by tomorrow. Its absence is causing issues with the BLE system, as we are relying on UNIX timestamps to match location data from the AR app with the Jetson’s output feed. Without a stable WiFi connection, the clock on the Jetson is constantly out of sync. This is not a hard task, and I do not foresee it causing any issues in the upcoming days. I also plan to aid my group members in any aspects of their tasks which they may be struggling with.

Erin’s Status Report for 4/6

This past week, I have been working on further integration of BLE (Bluetooth Low Energy) into our existing system. Last Monday, I had produced a fully functional system after resolving a few Python dependency issues. The Jetson was able to establish an ongoing BLE connection with my iPhone, and I had integrated the dirt detection Python script into the Bluetooth pipeline as well. There were a couple issues that I had encountered when trying to perform this integration:

  1. Dependency issues: The Python version which is required to run the dirt detection algorithm is different from the version which supports the necessary packages for BLE. In particular, the package I have been using (and the package which most Nvidia forums suggest) has a dependency on Python3.6. I was unable to get the same package working with Python3.8 as the active Python version. Unfortunately, the bless package which spearheads the BLE connection from the Jetson to the iPhone is not compatible with Python3.6; I resolved this issue by running Python3.8 instead. Since we need one end-to-end script running which queries results from the dirt detection module and sends data via BLE using the bless Bluetooth script, I needed to keep two conflicting Python versions active at the same time, and run one script which is able to utilize both of them.
  2. Speed constraints: While I was able to get the entire dirt detection system running, the granularity at which the iPhone is able to receive data is incredibly low. Currently, I have only been able to get the Jetson to send one message about every 25-30 seconds, which is not nearly enough with respect to what our use-case requirements have declared. This issue is further exacerbated by the constraints of BLE: the system is incredibly limited in the amount of data it is able to send at once. In other words, I am unable to batch multiple runs of the dirt detection algorithm and send them to the iPhone as a single message, as that message length would exceed the capacity that the BLE connection is able to uphold. One thought that I had was to send a singular timestamp, and then send over a number of binary values denoting whether the floor was clean or dirty within the next couple of seconds (or milliseconds) following the original timestamp. This implementation has not yet been tested, but I plan to create and run this script within the week.
  3. Jetson compatibility: On Monday, the script was fully functional, as shown by the images I have attached below. However, after refactoring the BLE script and running tests for about three hours, the systemd-networkd system daemon on the Jetson errored out, and the Bluetooth connection began failing consistently upon reruns of the BLE script. After spending an entire day trying to patch this system problem, I had no choice but to completely reflash the Jetson’s operating system. This resulted in a number of blockers which have taken me multiple days to fix, and some are still currently being patched. The Jetson does not come with built-in WiFi support, and lacks the necessary drivers for native support of our WiFi dongle. Upon reflashing the Jetson’s OS, I was unable to install, patch, download, or upgrade anything on the Jetson, as I was unable to connect the machine to the internet. Eventually, I was able to get hold of an Ethernet dongle, and I have since fixed the WiFi issue. The Jetson no longer relies on a WiFi dongle to operate, although I am now able to easily set that up, if necessary. In addition, while trying to restore the Jetson to its previous state (before the system daemon crashed), I ran into many issues while trying to install the correct OpenCV version. It turns out that the built-in Linux package for OpenCV does not contain the necessary support for the Jetson’s CSI camera (a sketch of the GStreamer-based capture the CSI camera requires appears after this list). As such, the dirt detection script that I had was rendered useless due to a version compatibility issue. I was unable to install the GStreamer package; I tried everything from Linux command line installation to a manual build of OpenCV with the necessary build flags. After being blocked for a day, I ended up reflashing the OS yet again.
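
For reference, OpenCV reads the Jetson's CSI camera through a GStreamer-backed capture rather than a plain device index, which is why a build without GStreamer support was unusable. A typical pipeline looks like the sketch below; the resolution and framerate values are illustrative rather than our exact configuration.

```python
import cv2

def open_csi_camera(width=1280, height=720, fps=30):
    """Open the Jetson CSI camera via a GStreamer pipeline (values illustrative)."""
    pipeline = (
        f"nvarguscamerasrc ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, framerate={fps}/1 ! "
        f"nvvidconv ! video/x-raw, format=BGRx ! "
        f"videoconvert ! video/x-raw, format=BGR ! appsink"
    )
    cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
    if not cap.isOpened():
        raise RuntimeError("Could not open CSI camera; check GStreamer support in OpenCV")
    return cap
```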

This week has honestly felt rather unproductive even though I have put in many hours; I have not dealt with so many dependency conflicts before. Moving forward, my main goal is to restore the dirt detection and BLE system to its previous state. I have the old, fully functional code committed to GitHub, and the only thing left to do is to set up the environment on the Jetson, which has proven to be anything but trivial. Once I get this set up, I am hoping to significantly speed up the BLE transmission. If this is not possible, I will look into further ways to ensure that our AR system is not missing data from the Jetson.

Furthermore, I have devised tests for the BLE and dirt detection subsystems. They are as follows:

  • BLE Connection: Try to send dirty/clean messages thirty times. Find delay between each test. Consider the summary statistics of this data which we collect. Assuming the standard deviation is not large, we can focus more on the mean value. We hope for the mean value to be around 0.1s, which gives us fps=10. This may not be achievable given the constraints of BLE, but with these tests, we will be able to determine what our system can achieve.
  • Dirt Detection
    • Take ten photos each of:
      • Clean floor
      • Unclean floor, sparse dirt
      • Unclean floor, heavy dirt
    • Assert that the dirt detection module classifies all ten images of the clean floor as “clean”, and all twenty images of the unclean floor as “dirty”. If this is not met, we will need to further tune the thresholds of the dirt detection algorithm.

Erin’s Status Report for 3/30

This week, I worked primarily on trying to get our Jetson hardware components configured so that they could run the software that we have been developing. Per our design, the Jetson is meant to run a Python script with a continuous video feed coming from the camera which is attached to the device. The Python script’s input is the aforementioned video feed from the Jetson camera, and the output is a timestamp along with a single boolean, detailing whether dirt has been detected on the image. The defined inputs and outputs of this file have changed since last week; originally I had planned to output a list of coordinates where dirt has been detected, but upon additional thought, mapping the coordinates from the Jetson camera may be too difficult a task, let alone the specific pixels highlighted by the dirt detection script. Rather, we have opted to simply send a single boolean, and narrow the window of the image where the dirt detection script is concerned. This decision was made for two reasons: 1) mapping specific pixels from the Jetson to the AR application in Swift may not be feasible with the limited resources we have, and 2) there is a range for which the active illumination works best, and where the camera is able to collect the cleanest images. I have decided to focus on that region of the camera’s input when processing the images through the dirt detection script.
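
Since the output is just a timestamp and a boolean, one frame's result can be serialized as a very short message. The sketch below shows one possible encoding; the exact format is illustrative and was not finalized at the time of writing.

```python
import time

def make_dirt_message(is_dirty: bool) -> bytes:
    """Serialize one dirt detection result as "unix_timestamp,0|1" (illustrative format)."""
    return f"{time.time():.3f},{1 if is_dirty else 0}".encode("utf-8")

# Example output: b'1711845632.108,1' means "dirt detected at this UNIX timestamp".
```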

Getting the Jetson camera fully set up was not as seamless as it had originally seemed. I did not have any prior trouble with the camera until recently, when I started seeing errors in the Jetson’s terminal claiming that there was no camera connected. In addition, the device failed to show up when I listed the camera sources, and if I tried to run any command (or scripts) which queried from the camera’s input, the output would declare the image’s width and height dimensions were both zero. This confused me, as this issue had not previously presented itself in any earlier testing I had done. After some digging, I found out that this problem typically arises when the Jetson is booted up without the camera inserted. In other words, in order for the CSI camera to be properly read as a camera input, it must be connected before the Jetson has been turned on. If the Jetson is unable to detect the camera, a restart may be necessary.

Another roadblock I ran into this week was trying to develop on the Jetson without an external monitor. I have not been developing directly on the Jetson device, since it has not been strictly necessary for the dirt detection algorithms to work. However, I am currently trying to get the data serialization to work, which requires extensive interaction with the Jetson. Since I have been moving around, carrying a monitor with me is simply impractical. As such, I have been working with the Jetson connected to my laptop, rather than to an external monitor with a USB mouse and keyboard. I used the `sudo screen` command in order to see the terminal of the Jetson, which is technically enough to get our project working, but I encountered many setbacks. For one, I was unable to view image outputs via the command line. When I was on campus, the process of getting WiFi set up on the Jetson was also incredibly long and tedious, since I only had access to the command line. I ended up using the command line tools from `nmcli` to connect to CMU-SECURE, and only then was I able to fetch the necessary files from Github. Since getting back home, I have been able to get a lot more done, since I have access to the GUI.

I am currently working on trying to get the Jetson to connect to an iPhone via a Bluetooth connection. To start, getting Bluetooth set up on the Jetson was honestly a bit of a pain. We are using a Kinivo BT-400 dongle, which is compatible with Linux. However, the device was not locatable by the Jetson when I first tried plugging it in, and there were continued issues with Bluetooth setup. Eventually, I found out that there were issues with the drivers, and I had to completely wipe and restore the hardware configuration files on the Jetson. The Bluetooth dongle seems to have started working after the driver update and a restart. I have also found a command line tool (`bluetooth-sendto --device=[MAC_ADDRESS] file_path`) which can be used to send files from the Jetson to another device. I have already written a bash script which runs this command, but sadly, it may not be usable. Apple seems to have placed certain security measures on their devices, and I am not sure that I will find a way to circumvent those within the remaining time we have in the semester (if at all). An alternative that Apple does allow is BLE (Bluetooth Low Energy), the technology used by CoreBluetooth, a framework available in Swift. My next steps are to create a dummy app which uses the CoreBluetooth framework and show that I am able to receive data from the Jetson; a rough sketch of what the Jetson's side of such a connection could look like is below. Note that this communication does not have to be two-way: the Jetson does not need to receive any data from the iPhone; the iPhone simply needs to be able to read the serialized data from the Python script. If I am unable to get the Bluetooth connection established, in the worst case I am planning to have the Jetson continuously push data to either a website or even a Github repository, which can then be read by the AR application. Obviously, doing this would incur higher delays and is not scalable, but this is a last resort. Regardless, the Bluetooth connection process is something I am still working on, and it is my priority for the foreseeable future.
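
The snippet below is only a rough sketch of the Jetson (peripheral) side, loosely following the server pattern documented for the bless package mentioned in the 4/6 report above. The UUIDs, device name, message value, and update loop are placeholders, and the exact API calls should be verified against the bless version in use; the iPhone side would subscribe to the characteristic with CoreBluetooth.

```python
import asyncio
from bless import (BlessServer, GATTCharacteristicProperties,
                   GATTAttributePermissions)

# Placeholder UUIDs; the real service/characteristic IDs would be ours to choose.
SERVICE_UUID = "0000ffff-0000-1000-8000-00805f9b34fb"
CHAR_UUID = "0000aaaa-0000-1000-8000-00805f9b34fb"

async def main():
    loop = asyncio.get_running_loop()
    server = BlessServer(name="JetsonDirtDetect", loop=loop)

    await server.add_new_service(SERVICE_UUID)
    await server.add_new_characteristic(
        SERVICE_UUID, CHAR_UUID,
        GATTCharacteristicProperties.read | GATTCharacteristicProperties.notify,
        None,
        GATTAttributePermissions.readable,
    )
    await server.start()

    # Periodically publish a "timestamp,dirty/clean" value; the iPhone's
    # CoreBluetooth central reads or subscribes to this characteristic.
    while True:
        server.get_characteristic(CHAR_UUID).value = bytearray(b"1711845632.1,1")
        server.update_value(SERVICE_UUID, CHAR_UUID)
        await asyncio.sleep(1.0)

asyncio.run(main())
```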

Team Status Report for 3/30

Risks

Our challenges currently lie with integrating the subsystems that we have all been working on in parallel. From our discussions this week, we have decided on data models for the output of Erin’s dirt detection algorithms, which are the inputs to Nathalie and Harshul’s AR mapping algorithms. Current risks lie in establishing Bluetooth communication between the Jetson camera and the iPhone: we have set up the connection for sending/receiving and can see the available device, but Apple’s black-box security measures currently prevent us from sending files. Developers have circumvented this in the past, so we are actively researching what methods they used. At the same time, we are actively exploring workarounds and have contingency plans in place. Options include employing web communication via HTTP requests or utilizing file read/write operations.

Other risks include potentially slow drawing functions when tracking the camera. Right now, there seems to be a lot of latency that impacts the usability of our system, so we are researching faster approaches within ARKit. To address this, Nathalie is exploring alternatives such as utilizing the SCNLine module to potentially enhance performance. Similarly, Harshul is working on creating child nodes in a plane to see which is faster. We can always use the GPU/CUDA if needed for additional speedup.

In addition, our main software components are making progress, but we need to focus on how to design and mount the hardware. This is a challenge because none of us have extensive experience in CAD or 3D printing, and we are in the process of deciding how to mount the hardware components (Jetson, camera, active illumination) such that they fit our ideal criteria (e.g., the camera needs to be mounted at the identified height and angle). Doing so early (immediately after the demos) will allow us to iterate through different hardware methods and try different mounts that we design to figure out what holds the most stability while not compromising image quality.

 

Schedule

In the coming week, we plan to flesh out a plan for how to mount our hardware on our vacuum. We have already set up the Jetson such that it will be easy to fasten to the existing system, but the camera and its positioning are more challenging to engineer mounts for. In addition, the AR iPhone application is nearing the end of its development cycle, as we are now working more on optimizations than on core features. We are considering options for how to mount the iPhone as well. Nathalie has been working on how to pinpoint the location of the rear camera view based on the timestamps received from the Jetson. This may still need to be tweaked after we get the Bluetooth connection fully functional, as this is one of the main action items we have for the coming week.

Erin’s Status Report for 3/23

This week, I continued to try and improve our existing dirt detection model. I also tried to start designing the mounting units for the Jetson camera and computer, but I realized I was only able to draw basic sketches of what I imagined the components to look like, as I have no experience with CAD and 3D modeling. I have voiced my concerns about this with my group, and we have planned to sync on this topic. Regarding the dirt detection, I had a long discussion with a friend who specializes in machine learning, and we discussed the tradeoffs of my current approach. The old script, which produced the results shown in last week’s status report, relies heavily on Canny edge detection when classifying pixels as either clean or dirty. The alternative approach my friend suggested was to use color to my advantage, something our use-case constraints allow but which I hadn’t considered earlier. Since our use case confines the flooring to be white and patternless, I can assume that anything “white” is clean, provided the camera is able to capture dirt which is visible to the human eye from a five-foot distance. Moreover, we are using active illumination. In theory, since our LED is green, I can simply threshold all values between certain shades of green as “clean”, and any darker colors, or colors with grayer tones, as dirt. This is because (in theory) the dirt we encounter will have a height component; when the light from the LED is shone onto a particle, the particle will cast a shadow behind it, which will be picked up by the camera. With this approach, I would only need to look for shadows, rather than for the actual dirt in the image frame. Unfortunately, my first few attempts at this approach did not produce satisfactory results, but it would be a lot less computationally expensive than the current dirt detection script, which relies heavily on the CPU-intensive OpenCV package.
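
A minimal sketch of the color/shadow idea described above, assuming an HSV threshold around the green cast of the LED; the specific bounds and dirty-fraction cutoff are placeholders that would need tuning, not values we have validated.

```python
import cv2
import numpy as np

# Placeholder HSV bounds for "brightly lit, clean floor" under the green LED;
# these would have to be tuned experimentally.
LOWER_CLEAN = np.array([35, 20, 120])
UPPER_CLEAN = np.array([90, 255, 255])

def shadow_based_dirt_check(frame_bgr, dirty_fraction=0.01):
    """Return (is_dirty, mask): pixels outside the "clean green" range are treated as shadows cast by dirt."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    clean_mask = cv2.inRange(hsv, LOWER_CLEAN, UPPER_CLEAN)
    dirt_mask = cv2.bitwise_not(clean_mask)             # everything outside the clean range
    is_dirty = (dirt_mask > 0).mean() > dirty_fraction  # flag if enough of the frame is shadowed
    return is_dirty, dirt_mask
```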

I also intend to get the AR development environment fully set up on my end soon. Since Nathalie and Harshul have been busy, they have not had the time to sync with me and get my development environment fully set up, although I have been brought up to speed regarding the capabilities and restrictions of the current working model.

My next steps as of this moment are to figure out the serialization of the data from the Jetson to the iPhone. While the dirt detection script is still imperfect, this is a nonblocking issue. I am currently in possession of most of our hardware, and I intend on getting the data serialization via Bluetooth done within the next week. This will also allow me to start benchmarking the delay for data to get from the Jetson to the iPhone, which is one of the components of delay we are worried about with regard to our use case requirements. We have shuffled around the tasks slightly, so I will not be integrating the dirt detection right now; I will simply be working on data serialization. The underlying idea here is that even if I am serializing garbage data, that is fine; we simply need to be able to gauge how fast the Bluetooth data transmission is. If we need a more efficient method, I can look into removing image metadata, which would reduce the size of the packets during the data transfer.

Team Status Report for 3/23

Risks

With the augmented reality floor mapping base implementation done, we are about to move into marking/erasing the overlay. Nathalie and Harshul have discussed multiple implementation strategies for marking coverage, and are not entirely sure which approach will be most successful; this is something we will determine when working on it this week. Our initial thought is to combine the work that we have each done separately (Nathalie having mapped the floor and Harshul having created logic to change plane color on tap). Specifically, we want to add more nodes in a specific shape to the floor plane in a different color, like red, with the diameter of the shape equivalent to the width of the floor vacuum. Still, we first need to figure out how to do that, and once it works, what shape would best capture the vacuum coverage dimensions. This is important because the visual representation of coverage is essential to our project actually working. As a fallback, we have experimented with the World Tracking Configuration logic, which is able to capture our location in space, and we are willing to explore how our alternative approaches might work to solve the problem of creating visual indicators on a frozen floor map.

The core challenge is that upon freezing map updates we run the risk of odometry error and drift: as we move around the room, tracking information changes but does not propagate to the planes already drawn in the scene. Keeping the map dynamic mitigates this, but it then prevents consistency in the actual dimensions of our plane, which makes it difficult to measure and benchmark our coverage requirements. One mitigation method would be to use custom update renderers that avoid redefining plane boundaries but possibly allow their anchor positions to change.

Another challenge that our group is currently facing is the accuracy of the mapping. While we addressed this issue before, the problem still stands. At this time, we have not been able to get the ARKit mappings to reflect the error rates that we desire, as specified by our use case requirements. This is due to the constraints of Apple’s hardware and software, and tuning these models may not be a viable option given the remaining time we have for the rest of the semester. Our group has discussed readjusting the error bounds in our use case requirements, and this is something we plan to flesh out within the week.

We also need to get started on designing and productionizing all the hardware components we need in order to assemble our product end to end. The mounts for the Jetson hardware as well as the active illumination LEDs need to be custom made, which means that we may need to go through multiple iterations of the product before we are able to find a configuration that works well with our existing hardware. Since the turnaround is tight considering our interim demo is quickly approaching, we may not be able to demonstrate our project as an end-to-end product; rather, we may have to show it in terms of the components that we have already tested. 

Scheduling 

We are now one week away from the interim demo. The last core AR feature we need to implement is plane erasure. We’ve successfully tracked the phone’s coordinates and drawn them in the scene. The next step is to project that data onto the floor plane. This would leave the AR subsystem ready to demo. Since our camera positioning has been finalized, we are beginning to move forward with designing and 3D printing the mounting hardware. The next milestones will entail a user-friendly integration of our app features as well as working on communication between the Jetson and the iPhone.



Erin’s Status Report for 3/16

The focus of my work this week was calibrating the height and angle at which we would mount the Jetson camera. In the past two weeks, I spent a sizable amount of my time creating a procedure any member of my group could easily follow to determine the best configuration for the Jetson camera. This week, I performed the tests, and have produced a preliminary result: I believe that mounting the camera at four inches above floor level and a forty-five degree angle will produce the highest quality results for our dirt detection component. Note that these results are subject to change, as our group could potentially conduct further testing at a higher granularity. In addition, I still have to sync with the rest of my group members in person to discuss matters further.

Aside from running the actual experiments to determine the optimal height and angle for the Jetson camera, I also included our active illumination into the dirt detection module. We previously received our LED component, but we had run the dirt detection algorithms without it. Incorporating this element into the dirt detection system gives us a more holistic understanding of how well our computer vision algorithm performs with respect to our use case. As shown by the images I used for the testing inputs, my setup wasn’t perfect—the “background”, or “flooring” is not an untextured, patternless white surface. I was unable to perfectly mimic our use case scenario, as we have not yet purchased the boards that we intend to use to demonstrate the dirt detection component of our product. Instead, I used paper napkins to simulate the white flooring that we required in our use case constraints. While imperfect, this setup configuration suffices for our testing.

Prior to running the Jetson camera mount experiment, I had been operating under the assumption that the output of this experiment would depend heavily on the outputs that the computer vision script generated. However, I realized that for certain input images, running the computer vision script was wholly unnecessary; the input image itself did not meet our standards, and that camera configuration should not have been considered, regardless of the performance of the computer vision script. For example, at a height of two inches and an angle of zero degrees, the camera was barely able to capture an image of any worthy substance. This is shown by Figure 1 (below). There is far too little workable data within the frame; it does not capture the true essence of the flooring over which the vacuum had just passed. As such, this input image alone rules out this height and angle as a candidate for our Jetson camera mount.

Figure 1: Camera Height (2in), Angle (0°)

I also spent a considerable amount of time refactoring and rewriting the computer vision script that I was using for dirt detection. I have produced a second algorithm which relies more heavily on OpenCV’s built-in functions, rather than preprocessing the inputs myself. While the output of my test image (the chosen image from the Jetson camera mount experiment) against this new algorithm does appear to be slightly noisier than we would like, I did not consider this a substantial issue. This is because the input image itself was noisy; our use case encompasses patternless, white flooring, but the napkins in the image were highly textured. In this scenario, I believe that the fact that the algorithm detected the napkin patterning is actually beneficial to our testing, which is a factor that I failed to consider the last couple of times I tried to re-tune the computer vision script.

I am slightly behind schedule with regard to the plan described by our Gantt chart. However, this issue can (and will) be mitigated by syncing with the rest of my group. In order to perform the ARKit Dirt Integration step of our scheduled plan, Nathalie and Harshul will need to have the AR component working in terms of real-time updates and localization.

Within the next week, I hope to help Nathalie and Harshul with any areas of concern in the AR component. In addition, I plan to start designing the camera mount, and place an order for the Jetson camera extension cord, as we have decided that the Jetson will not be mounted very close to the camera.