One of the main risks we encountered is that the pandemic slowed many ordering services, so parts required to begin the work process arrived late (week of 10/3). However, as of this post, all the parts needed to start the first stage of development have been received, so we can begin properly setting up camera nodes to gather preliminary image data for the YOLO algorithm, as well as start on TCP server code for the central node. One change to the design is that we will use Google Colab instead of AWS for development. We are in the middle of updating our Gantt schedule to reflect a new timeline, since Krish is in a different time zone than Pablo and Arjun.
Arjun’s Status Update for 10/10
My first task is to set up preliminary server-side code on an NVIDIA Jetson Nano to receive TCP packets. This involves writing simple server-side code and uploading it to the Nano. So far, I have written preliminary code that sets up a listening socket and echoes text back to any client that connects. The parts (WiFi adapter and Nano) arrived this week, and I am in the middle of testing the software on the Nano and getting used to working with it. My next goals are to get this code running properly on the Jetson Nano and to look for a replacement WiFi adapter if any issues arise. I budgeted extra time for certain tasks in case something like this (parts arriving later than expected) occurred, so we are still roughly on schedule.
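The echo server described above can be sketched in a few lines with Python's standard `socket` module. This is only an illustrative version, not Arjun's actual code; the host and port values are placeholders:

```python
import socket

def run_echo_server(host="0.0.0.0", port=5000):
    """Accept one TCP connection and echo every received packet back.

    host/port are placeholders; the real node would use the Nano's
    address on the local network.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        # Allow quick restarts without "address already in use" errors.
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        conn, addr = srv.accept()
        with conn:
            while True:
                data = conn.recv(1024)
                if not data:  # empty read: client closed the connection
                    break
                conn.sendall(data)  # echo the bytes back unchanged
```

A client can verify the loop by connecting with `socket.create_connection` and checking that whatever it sends comes back verbatim.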
Krish’s Status Update for 10/10
In the proposal presentation we had mentioned that all the code would be written on AWS. However, I realised that Google Colab is better suited for development because it gives us free access to a GPU. This lets us spend an effectively unbounded amount of time developing the model without straining our budget. When we are ready to deploy the code, we can export it to AWS, which is more robust.
This week, I set up the pipeline to train the ML model for the project. Usually this is done once the dataset is available, so that the preprocessing steps can be tested against real data. Since our data is not yet available, I built the pipeline to the best of my ability without it.
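One piece of such a pipeline that can be written before the data arrives is the dataset split. A minimal standard-library sketch (the function name and split fractions are illustrative, not from the actual pipeline):

```python
import random

def train_val_test_split(items, val_frac=0.1, test_frac=0.1, seed=42):
    """Shuffle a list of image paths/IDs and split it for training.

    A fixed seed keeps the split reproducible across runs.
    """
    shuffled = list(items)
    random.Random(seed).shuffle(shuffled)
    n_val = int(len(shuffled) * val_frac)
    n_test = int(len(shuffled) * test_frac)
    val = shuffled[:n_val]
    test = shuffled[n_val:n_val + n_test]
    train = shuffled[n_val + n_test:]
    return train, val, test
```

Once real image paths exist, the same function can split them without any other change to the pipeline.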
I also spent a good amount of time researching tools to use once the data is available. One of the tools we need is an annotation tool, which lets me draw bounding boxes over the data images and assign labels through a GUI. Since we hope to collect around 10k images, an annotation tool can significantly speed up the labelling process, which could otherwise be a bottleneck going forward. In my research, I found LabelImg (https://github.com/tzutalin/labelImg), which seems to be the best annotation software because it saves annotations in the PASCAL VOC format and can also export the plain-text label format that YOLO expects.
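PASCAL VOC stores boxes as absolute pixel corners (`xmin, ymin, xmax, ymax`), while YOLO label files use a normalized `class cx cy w h` line per object, so a small conversion step sits between annotation and training. A sketch of that conversion (the file layout and class list here are assumptions for illustration):

```python
import xml.etree.ElementTree as ET

def voc_box_to_yolo(xmin, ymin, xmax, ymax, img_w, img_h):
    """Convert VOC pixel corners to YOLO's normalized center/size format."""
    cx = (xmin + xmax) / 2.0 / img_w
    cy = (ymin + ymax) / 2.0 / img_h
    w = (xmax - xmin) / img_w
    h = (ymax - ymin) / img_h
    return cx, cy, w, h

def convert_voc_xml(xml_path, class_names):
    """Parse one PASCAL VOC XML file and yield YOLO-format label lines."""
    root = ET.parse(xml_path).getroot()
    img_w = int(root.find("size/width").text)
    img_h = int(root.find("size/height").text)
    for obj in root.findall("object"):
        cls = class_names.index(obj.find("name").text)
        b = obj.find("bndbox")
        box = voc_box_to_yolo(
            float(b.find("xmin").text), float(b.find("ymin").text),
            float(b.find("xmax").text), float(b.find("ymax").text),
            img_w, img_h)
        yield f"{cls} " + " ".join(f"{v:.6f}" for v in box)
```

For example, a 100x50-pixel box in the top-left corner of a 200x100 image becomes center `(0.25, 0.25)` with size `(0.5, 0.5)`.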
With reference to the machine learning, we are on schedule. I can start working on the next big steps once I have access to the data.
References:
Pipeline:
https://medium.com/oracledevs/final-layers-and-loss-functions-of-single-stage-detectors-part-1-4abbfa9aa71c
https://www.curiousily.com/posts/object-detection-on-custom-dataset-with-yolo-v5-using-pytorch-and-python/
Annotations:
https://github.com/tzutalin/labelImg
https://github.com/ujsyehao/COCO-annotations-darknet-format
https://github.com/wkentaro/labelme
Introduction
Smart Library is a project that aims to scan a room (particularly a public space) and indicate how many seats are available for use. It also aims to analyze daily and weekly patterns of when seats are usually available, and to take social distancing into account during the pandemic. Ideally, the system will be deployed in Sorell's Library. Users will interact with the information through a website.