Team B4: Smart Library
Smart Library is an intelligent service that lets you see where socially distant seating is available in a public space, without having to visit the space yourself. Designed for CMU's Sorrells library (but hosted in Pablo's dining room due to Covid), our node network collects images and uses a machine learning algorithm to determine which seats are available and safe to sit in. Smart Library does not save or collect any identifying information to determine seat occupancy. Smart Library was created by Pablo Wilson, Krish Vaswani, and Arjun Raguram.
Video Link: https://www.youtube.com/watch?v=2dljjIvalZc
Team Status Update for 5/12
Since campus is now completely closed and Pablo is now based in Virginia, some reorganizing needed to be done. First of all, with the library closed, we had to find a new location to capture images. The new location is Pablo's dining room table, capturing the 4 chairs there. With this new setup, Pablo needed a couple of extra parts to get the image capture nodes set up (things like wires and breadboards), so he was a little behind, but data is now being captured and uploaded. The rest of the system is following close behind.
Arjun’s Status Update for 5/12
This week Pablo and I discussed integration steps for the camera and central node code. The central node could still receive requests, but because the file size of the images was larger than I expected (1 MB), the central node could not receive the entire image. This meant the code for receiving TCP messages had to change a good bit: the connection either resets early or hangs in an infinite loop while trying to read the larger files. This is something I am still working on, but I expect to have a solution by the time of the final video and report. I am considering going back to a C solution since pre-processing is no longer necessary; this would help because the buffers for received data could use heap allocation effectively.
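For reference while debugging, below is a minimal sketch in Python of a length-prefixed receive loop that keeps reading until the full image has arrived. The 4-byte size header and the recv_exact/recv_image names are assumptions for illustration, not the current protocol between the nodes.

```python
import socket
import struct

def recv_exact(conn: socket.socket, nbytes: int) -> bytes:
    """Read exactly nbytes from the connection, looping until done."""
    chunks = []
    remaining = nbytes
    while remaining > 0:
        chunk = conn.recv(min(remaining, 4096))
        if not chunk:  # peer closed the connection before sending everything
            raise ConnectionError("connection closed before full image arrived")
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)

def recv_image(conn: socket.socket) -> bytes:
    # Assumes the camera node first sends a 4-byte big-endian length header,
    # then the raw image bytes.
    (length,) = struct.unpack("!I", recv_exact(conn, 4))
    return recv_exact(conn, length)
```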
Pablo’s Status Update for 5/12
This week, the focus was on getting the system integrated and set up in my dining room. The breadboards and wires luckily arrived sooner than expected and I was able to have the image capture nodes set up! Image from the node below:
I am still awaiting final integration with the server since I don't have the server code yet, but an easy workaround is automated batch uploading of the images to the model. My tasks are complete on my end, and I will now transition to helping out in other areas to ensure we are fully functional by final presentation time.
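As one possible shape for that workaround, here is a minimal sketch of a batch upload script, assuming the model is eventually exposed behind an HTTP endpoint; the URL, port, and capture directory are placeholders, and the requests package is assumed to be installed.

```python
import glob
import requests  # third-party; assumed available on the capture node

# Hypothetical endpoint where the model accepts images; the real path
# and port would come from the server code once it is ready.
MODEL_URL = "http://localhost:5000/upload"

def upload_batch(capture_dir: str = "captures") -> None:
    """POST every captured JPEG to the model endpoint, oldest first."""
    for path in sorted(glob.glob(f"{capture_dir}/*.jpg")):
        with open(path, "rb") as f:
            resp = requests.post(MODEL_URL, files={"image": f})
        print(path, resp.status_code)

if __name__ == "__main__":
    upload_batch()
```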
Krish’s Status Update for 5/12
Last week, I finished making the model using my custom dataset. I came across a major problem when working with the YOLO model, which is what we had previously intended to use. Initially, I thought an object detection model like YOLO was ideal, since it is trained to pick up different objects in a scene. However, one shortcoming I did not foresee was that the images we collect in our system are not like natural images, because they are taken from an aerial view. The following two pictures display this disparity.
In the first image, the picture is taken from the side; the second is taken from the top. A human can easily identify the presence or absence of another person from either angle, but a machine learning model trained on side-view images cannot abstract this away.
In order to fix this problem, I decided to build my own machine learning model, without any pretrained weights. The advantage is that it focuses solely on the data we have fed it, so it does not need to depend on natural images. The disadvantage is that I need a simpler model, since there is less data available. For this reason I adapted the approach: instead of the model finding the locations of the seats, I specify the seat locations myself. The algorithm then crops out each seat, resizes the crops, and identifies whether there is a person in each seat or not. I thought this was a fair compromise, given that the seats in Pablo's dining room are in a relatively fixed position, and so is the camera that we set up. If we were to take this project further, there would be an extra cost associated with installing the system in a new location, but this cost would be negligible compared to the effort it would take to mount the camera and central nodes at the location.
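A minimal sketch of that crop-and-classify pipeline is below, assuming OpenCV for image handling. The seat coordinates, crop size, and the classify callable are placeholders for illustration, not the actual trained model or measured layout.

```python
import cv2  # assumes OpenCV is installed

# Hypothetical pixel coordinates of the four seats in the fixed camera frame,
# given as (x, y, width, height); the real values would be measured once
# from a reference image of the dining table.
SEAT_REGIONS = [
    (100, 200, 150, 150),
    (300, 200, 150, 150),
    (100, 400, 150, 150),
    (300, 400, 150, 150),
]

INPUT_SIZE = (64, 64)  # size the classifier expects

def seat_crops(image_path: str):
    """Crop each predefined seat region and resize it for the classifier."""
    frame = cv2.imread(image_path)
    for (x, y, w, h) in SEAT_REGIONS:
        crop = frame[y:y + h, x:x + w]
        yield cv2.resize(crop, INPUT_SIZE)

def occupancy_bits(image_path: str, classify) -> list:
    # `classify` stands in for the trained person/no-person model;
    # it should return 1 for occupied and 0 for empty.
    return [classify(crop) for crop in seat_crops(image_path)]
```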
Additionally, I worked on the frontend of the website. When we had planned on setting up in Sorrells Library, I did not know the layout of the seats, especially since we were not sure where we could have mounted the cameras. Once I got a few pictures of Pablo's dining room, I could understand the layout and set up the website to mirror it. Right now, the website can read four occupancy bits and display an appropriate HTML page, with red and green markers at positions representing the seats at the dining table.
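For illustration, a minimal sketch of turning the four occupancy bits into a red/green page is below; the real site lays the seats out to mirror the table, so the styling and file name here are placeholders.

```python
def render_page(bits):
    """Build a simple HTML page with one colored cell per seat bit (1 = occupied)."""
    cells = []
    for i, bit in enumerate(bits):
        color = "red" if bit else "green"
        label = "Occupied" if bit else "Available"
        cells.append(
            f'<div style="background:{color};width:120px;height:120px;'
            f'display:inline-block;margin:10px;text-align:center;">'
            f'Seat {i + 1}<br>{label}</div>'
        )
    return "<html><body><h1>Dining table seats</h1>" + "".join(cells) + "</body></html>"

if __name__ == "__main__":
    with open("seats.html", "w") as f:
        f.write(render_page([0, 1, 0, 0]))  # example: only seat 2 occupied
```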
Pablo’s Status Update for 21/11
This week I worked on getting approval for setting up in Sorrells, putting together a housing for the node, and doing the reading assignment. I went to Sorrells, got the contact information for the people I need to talk to (Amy Perrier and Barry Schles), and contacted them, but unfortunately I did not receive a response before leaving Pittsburgh. Earlier this week I made a mock-up of the housing for the node and reworked it since the portable chargers were larger than anticipated. The housing is ready to be laser cut when I return to campus.
As of this week, I am behind schedule and will need to work over Thanksgiving break to catch up. I plan on gathering datasets from my home to improve image diversity for the ML model, and on integrating the server code when it is finished, since I have now been shipped the final set of components for the full node network and am ready for implementation.
Arjun’s Status Update for 21/11
This week I could not work on the project as much due to a large number of other commitments. I discussed with Krish whether we needed any data pre-processing, such as image stitching, for the machine learning model he is working on for chair detection. We decided that it wasn't necessary for our situation and that we would not worry about implementing it in the future.
Krish’s Status Update for 21/11
This week, I worked a bit more on the machine learning model. At the review, Professor Yu suggested I make a small dataset from a few images of my desk. While this is not ideal, it does help me start some of the work. I have faced some issues with the model, which I plan on debugging on Monday and after Thanksgiving break. There are two main issues right now: overfitting and sensitivity to hyperparameters.
Overfitting. With a small dataset, it is easy to end up with a machine learning model that memorizes the specific outputs expected on the training data. Such a model does not identify patterns and therefore doesn't generalize well. Due to the delays in acquiring data, I haven't been able to collect a dataset of the right size, so the model is not as strong as I would like it to be.
Sensitivity to hyperparameters. Despite having only a small dataset, there are issues even with learning the training data. Small changes in the training hyperparameters cause large changes in inference accuracy on the training set, and this will only get worse as the dataset grows. This is a more fundamental issue that I need to research further before I attempt to fix it. My guess is that there is an issue with running Darknet in a Jupyter notebook (Darknet is the framework in which YOLO is implemented). One fix I want to explore is switching to a version of YOLO written in PyTorch (i.e. Python), which might resolve the issue.
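As a sketch of that direction, one PyTorch implementation can be loaded through torch.hub as shown below; the specific repo and model size here are one possible choice for experimentation, not a decided part of our design, and the image path is a placeholder.

```python
import torch  # assumes PyTorch is installed

# Load a PyTorch YOLOv5 model via torch.hub, avoiding the Darknet toolchain.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

results = model("desk_photo.jpg")  # path to one of the desk images
results.print()                    # prints detected classes and confidences
```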
I also wrote a short Python script that solves the problem I described in the last status update: some of the images Pablo captures lose data, and the bottom part of the image becomes a set of vertical lines. I described the approach in the last status update but had not implemented it then; since implementing it, there have been no problems.
Arjun’s Status Report for 14/11
This week I mainly dealt with concurrency in the current implementation of the central node. The working implementation is in Python, and I had been using the asyncio library to handle concurrent connections. However, I ran into issues with it: the program was not running correctly because the library is not completely thread safe, and it was running slower than expected. I decided to switch to a select()-based concurrency model, in which the central node server scans a set of connections to see which are ready for reading and then loops through and reads from those connections. This did not hinder the performance of the central node and was still within the metrics we had set (10 seconds for receiving data, 30 seconds total). I also had a discussion with Pablo about the Jetson Nano and the WiFi adapter, which is being shipped to Pablo's place so he can set it up there.
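Below is a minimal sketch of the select()-based loop described above; the port number is a placeholder and the per-connection buffering and image protocol are omitted.

```python
import select
import socket

# Listening socket for the central node; the port is a placeholder.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 9000))
server.listen()

connections = [server]
while True:
    # Ask select() which sockets are ready for reading (1 second timeout).
    readable, _, _ = select.select(connections, [], [], 1.0)
    for sock in readable:
        if sock is server:
            conn, addr = server.accept()   # new camera node connected
            connections.append(conn)
        else:
            data = sock.recv(4096)
            if not data:                   # client finished sending
                connections.remove(sock)
                sock.close()
            else:
                pass  # append data to this connection's image buffer
```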