Pablo’s Status Update for 21/11

This week I worked on getting approval for setting up in Sorrells, putting together a housing for the node, and doing the reading assignment. I went to Sorrells and got the contact information for the people I need to talk to (Amy Perrier and Barry Schles) and contacted them, but unfortunately did not receive a response before leaving Pittsburgh. Earlier this week I made a mock-up of the housing for the node and reworked it, since the portable chargers were larger than anticipated. The housing is ready to be laser cut when I return to campus.

As of this week, I am behind schedule and will need to work over Thanksgiving break to catch up. I plan on gathering datasets from my home to improve the image diversity for the ML model, and on integrating the server code once it is finished, since I have now been shipped the final set of components for the full node network and am ready for implementation.

Arjun’s Status Update for 21/11

This week I could not work on the project as much due to a large number of other commitments. I discussed with Krish whether we needed any data pre-processing, such as image stitching, for the machine learning model he is working on for chair detection. We decided that it wasn't necessary for our situation and that we would not implement it in the future.

Krish’s Status Update for 21/11

This week, I worked a bit more on the machine learning model. At the review, Professor Yu had suggested I make a small dataset from a few images of my desk. While this is not ideal, it does help me start some of the work. I have faced some issues with the model which I plan on debugging on Monday and after Thanksgiving break. I have two main issues right now, namely overfitting and sensitivity to hyperparameters.

Overfitting. With a small dataset, it is easy to make a machine learning model that memorizes the specific outputs expected on the training data. Such a model does not learn general patterns and therefore doesn't generalize well. Due to the delays in acquiring data, I haven't been able to collect a dataset of the right size, so the model is not as strong as I would like it to be.

Sensitivity to hyperparameters. Despite having only a small dataset, there are issues even with learning the training data. Small changes in the training hyperparameters cause large changes in inference accuracy on the training set, and this will only get worse as the dataset grows. This is a more fundamental issue that I need to research further before I attempt to fix it. My guess is that there is an issue with running Darknet from a Jupyter notebook (Darknet is the C framework in which YOLO is implemented). One fix I want to explore is switching to a version of YOLO written in PyTorch, which might resolve the issue.
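As a rough illustration of the PyTorch route (a sketch only; the ultralytics/yolov5 implementation, model size, and image path are assumptions on my part, not something we have committed to), loading and running a PyTorch YOLO model looks like this:

```python
# Sketch of the PyTorch-based YOLO route, assuming the ultralytics/yolov5
# implementation; the model size and image path are placeholders.
import torch

# Download a small pretrained YOLOv5 model via torch.hub
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

# Run inference on one image from the desk dataset
results = model("desk_test.jpg")

# Inspect detected classes, confidences, and bounding boxes
print(results.pandas().xyxy[0])
```

Training on our own chair dataset would still need its own configuration, but moving off Darknet this way would let everything run natively in Python.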

I also wrote a short Python script that solves the problem I described in the last status update. The issue Pablo is facing is that some of the images lose data and the bottom part of the image becomes a set of vertical lines. I described the approach in the last status update but had not implemented it then; since then, there have been no problems implementing it.

Arjun’s Status Report for 14/11

This week I was mainly dealing with concurrency in the current implementation of the central node. The working implementation is in Python, and I had been using Python's asyncio library for concurrent handling of the node connections. However, I ran into issues: the program did not behave correctly because not everything I was using was thread safe, and it ran slower than expected. I decided to switch to a select()-based concurrency model, meaning the central node server scans its set of connections to see which need reading, then loops through and reads from the connections deemed ready. This did not hinder the performance of the central node, which stayed within the metrics we had set (10 seconds for receiving data, 30 seconds total). I also had a discussion with Pablo about the Jetson Nano and the WiFi adapter, which is being shipped to Pablo's place to be set up there.
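For reference, a minimal sketch of this select()-based model is below; the bind address, port, and the convention that a node closes its connection after sending one image are assumptions for illustration, not our exact protocol.

```python
# Sketch of a select()-based central node server; address, port, and the
# "close connection after one image" framing are illustrative assumptions.
import select
import socket

HOST, PORT = "0.0.0.0", 5000  # placeholder bind address and port

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind((HOST, PORT))
server.listen()
server.setblocking(False)

sockets = [server]  # all connections being monitored
buffers = {}        # bytes received so far, per connection

while True:
    # select() returns the subset of sockets that are ready for reading
    readable, _, _ = select.select(sockets, [], [], 1.0)
    for sock in readable:
        if sock is server:
            # New camera node connecting
            conn, addr = server.accept()
            conn.setblocking(False)
            sockets.append(conn)
            buffers[conn] = b""
        else:
            data = sock.recv(4096)
            if data:
                buffers[sock] += data
            else:
                # Empty read: the node closed the connection, so the
                # accumulated bytes form one complete uploaded image
                sockets.remove(sock)
                image_bytes = buffers.pop(sock)
                sock.close()
                # ...hand image_bytes off for processing here
```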

Pablo’s Status Update for 14/11

This week, I worked on getting low power mode working with the ArduCam and testing the battery life of the nodes. The current battery life is almost 4 days (5500 images captured and uploaded), so once low power mode is properly implemented and passes the timing requirements, we should be well within all requirements for the node. During battery life testing I encountered an issue where a portion of my images were not fully uploaded, leaving vertical lines at the bottom, and some were not even recognized as images. I'm hoping this is caused by the very simple test server I set up and by uploading at six times the anticipated rate, but I've talked with Krish about it and it shouldn't be too hard a problem to overcome for the preliminary datasets.

I am currently on schedule, but next week will be very tight: I am still waiting on server code to integrate and on confirmation of approval to mount nodes in Sorrells, I have the reading assignment to do, and I am heading home on Friday.

Krish’s Status Update for 14/11

There is still not much I could do this week before we get the data from Sorrells. However, I did manage to find some useful resources. Specifically, I found a website called Roboflow, which will allow me to take my labelled training data and run some preprocessing on it. This is different from the preprocessing we plan on running in the central node, as it specifically pertains to the machine learning model.

The main advantage Roboflow offers is that it will help me convert my annotations from XML to the darknet text format in bulk. For the initial picture of my workspace that I used to test the pipeline, I did this manually. Now that I have found Roboflow, I can do this automatically, saving a lot of time when processing thousands of images.
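For context, the conversion being automated looks roughly like the sketch below. This is not Roboflow's code, just an illustration, assuming Pascal VOC-style XML annotations and a single chair class.

```python
# Sketch of converting a Pascal VOC-style XML annotation to the darknet
# text format (class x_center y_center width height, all normalized).
import xml.etree.ElementTree as ET

def voc_xml_to_darknet(xml_path, txt_path, class_id=0):
    root = ET.parse(xml_path).getroot()
    img_w = float(root.find("size/width").text)
    img_h = float(root.find("size/height").text)
    lines = []
    for obj in root.findall("object"):
        box = obj.find("bndbox")
        xmin, ymin = float(box.find("xmin").text), float(box.find("ymin").text)
        xmax, ymax = float(box.find("xmax").text), float(box.find("ymax").text)
        x_c = (xmin + xmax) / 2.0 / img_w   # normalized box center
        y_c = (ymin + ymax) / 2.0 / img_h
        w = (xmax - xmin) / img_w           # normalized box size
        h = (ymax - ymin) / img_h
        lines.append(f"{class_id} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}")
    with open(txt_path, "w") as f:
        f.write("\n".join(lines))
```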

Another advantage Roboflow offers is increasing the size of my dataset. It lets me perform transformations like rotation, scaling, and blurring on duplicates of the images in the dataset. With combinations of these transformations I could increase the size of my dataset by a factor of 3-10. One trade-off to keep in mind is that this lowers the effective quality of the dataset, since the augmented images will be somewhat similar to one another.
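As a rough illustration of the kind of augmentation involved (not Roboflow's implementation; the parameters are placeholders), something like the OpenCV sketch below produces rotated, rescaled, and blurred copies. In practice the bounding-box labels would need the same geometric transform applied.

```python
# Sketch of rotation/scale/blur augmentation with OpenCV; parameters are
# illustrative, and bounding boxes would need the same transform applied.
import cv2
import numpy as np

def augment(img):
    h, w = img.shape[:2]
    angle = np.random.uniform(-15, 15)   # small random rotation (degrees)
    scale = np.random.uniform(0.9, 1.1)  # slight random rescale
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
    out = cv2.warpAffine(img, M, (w, h))
    return cv2.GaussianBlur(out, (5, 5), 0)  # mild blur

# Example: make three augmented copies of one labelled image
img = cv2.imread("desk.jpg")
for i in range(3):
    cv2.imwrite(f"desk_aug_{i}.jpg", augment(img))
```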

On a different note, one issue that Pablo brought to my attention is that some of the captured images are distorted. The bottom part of the image is cut off and replaced with vertical lines, as shown in the picture below. Pablo mentioned this was due to a wiring issue, but I am also planning on handling this problem in software.

Bad Quality Image

One thing to note is that the lines that cause the distortion are perfectly vertical and always appear at the bottom of the picture. This can be detected using a vertical Sobel filter. The Sobel filter acts as a high-pass filter along one image dimension. Since there is no vertical change in the corrupted part of the image, that region contains only a DC bias; the high-pass filter removes this bias and leaves the corrupted bottom rows all zeros. After that, we simply need to check whether the bottom rows of the filtered image are all zeros to detect this kind of error.
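A minimal sketch of this check, assuming OpenCV and a grayscale input (the number of rows inspected and the zero threshold are placeholders, not tuned values):

```python
# Sketch of the vertical-Sobel corruption check; num_rows and threshold
# are placeholder values, not tuned settings.
import cv2
import numpy as np

def has_vertical_line_corruption(path, num_rows=20, threshold=1e-3):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Derivative in the vertical direction: the corrupted stripes have no
    # change down a column, so the filter response there is ~0
    sobel_y = cv2.Sobel(img, cv2.CV_64F, dx=0, dy=1, ksize=3)
    bottom = np.abs(sobel_y[-num_rows:, :])
    return bottom.mean() < threshold
```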

Arjun’s Status Update for 7/11

This week, I was able to use a test client I created that sends small images (40-70 kB) over TCP, and the central node properly received the full image, as indicated by the central node program. I was also able to run the test on the Jetson Nano. Pablo and I discussed how we want the central and camera nodes to communicate with each other. We decided that they should communicate over raw TCP instead of HTTP, because we don't need the full HTTP protocol just to transfer images, so HTTP would be redundant. We could not test any code for that since Pablo was working on these parts of the camera node this week.
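For reference, a test client in this spirit can be sketched as below; the central node address, port, and the convention of closing the connection after one image are assumptions for illustration, not our finalized protocol.

```python
# Sketch of a test client that streams one image over TCP; the address,
# port, and "close after one image" framing are illustrative assumptions.
import socket

CENTRAL_NODE = ("192.168.1.10", 5000)  # placeholder central node address

def send_image(path):
    with open(path, "rb") as f:
        payload = f.read()
    with socket.create_connection(CENTRAL_NODE) as sock:
        sock.sendall(payload)  # stream the raw JPEG bytes
    # Closing the socket signals end-of-image to the central node

send_image("test_40kb.jpg")
```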

Krish’s Status Update for 7/11

I was not able to do much work this week on the project, due to my other commitments. Next week, we should have some data available, so that I can start training the machine learning model.

Team Status Update for 7/11

A change in design was made this week to the Camera Node. To supply enough voltage from the LiPo batteries to the camera module, a 5V boost converter would have been needed. We decided instead to use 5V portable chargers. Cost-wise, this is the cheaper solution; however, we lose the ability to remotely read the battery charge, which introduces the risk of a node running out of battery without us knowing. To combat this, we chose portable batteries with over 5 times the capacity of our LiPo batteries, so the nodes will comfortably meet the 72-hour uptime requirement. The site will also now be notified when a node disconnects, but we should still monitor how long the nodes have been active and recharge them well before they drain completely.

The Camera Node schedule has been updated and pushed back a week. Other tasks, including dependencies, have been updated accordingly; there is no major shift in the final end date.

Pablo’s Status Update for 7/11

This week, I finished the single node. I had to overcome two major problems: outdated libraries and incompatible hardware. The libraries for the ArduCam were written for Arduino and had been reworked by someone else to work with the Particle Photon. Unfortunately, the pinout and macros for the Particle Argon are different, which introduced a slew of errors. Luckily, I managed to rework the library much faster than anticipated and got the Camera Node up and running. The next issue came when trying to make the node completely standalone: insufficient voltage from the LiPo battery. I realized I only hit the error when the node was disconnected from the computer, because it had been drawing extra power over the micro USB connection. The solution was to move from the LiPo battery to portable chargers. With this I now have images being captured and sent remotely over TCP! (picture from the Camera Node below!)

To be quite honest, I wanted to have this done much earlier, but the past week has been rough mentally due to the election. I am slightly behind, but I have readjusted our Gantt chart and we are still within the margin we laid out for ourselves. I anticipate being completely done, with the Camera Node network set up in Sorrells, two weeks from now. Next week, I plan on building the housing for the node and capturing the preliminary dataset from my apartment.