Joshua’s Status Report for 10/30/21

For this week, I set out to accomplish two main tasks: addressing our problem with AWS, which would allow the training process for our CNN to begin on time, and expanding on our previous research on pre-trained models. The latter is to make sure we have a model ready to use if our own software model either isn't ready on time or fails unexpectedly, so that we can still submit a final, working product.

In terms of the research, I worked with James to survey various pre-trained models online. As we had found initially, many pre-trained models based on the paper we are using don't follow the method stated in the paper exactly, and instead rely heavily on filters rather than CNNs as their main upscaling method. A surprising number of them also throw in filters such as line-sharpening and anti-blurring, which greatly increase computation time and hence cannot realistically be run in real time. We eventually found an open-source SRCNN implementation in Python on GitHub, which does use a CNN but is only rated for up to 4x upscaling. This detracts slightly from our initial goal of 4.5x upscaling, which we had determined to be achievable, but it would still be viable to put on hardware to demonstrate the acceleration possible from our FPGA implementation. The dataset it uses is different from ours, since it was built mainly from still images rather than key frames of videos, but it is still relevant because it shares the characteristics we chose for our dataset: a variety of shots, close-up vs. zoomed out, nature vs. still objects, and so on.
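For context, SRCNN itself is a small three-layer network, which is what makes it attractive as a fallback. A minimal sketch in PyTorch, assuming the 9-1-5 filter sizes and 64/32 channel widths from the original paper (the GitHub implementation we found may use different hyperparameters or a different framework), would look roughly like this:

```python
# Minimal SRCNN sketch (assumed 9-1-5 kernels, 64/32 channels from the
# original paper; the open-source version we found may differ).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SRCNN(nn.Module):
    def __init__(self, channels=1):
        super().__init__()
        # Patch extraction, non-linear mapping, and reconstruction layers.
        self.conv1 = nn.Conv2d(channels, 64, kernel_size=9, padding=4)
        self.conv2 = nn.Conv2d(64, 32, kernel_size=1)
        self.conv3 = nn.Conv2d(32, channels, kernel_size=5, padding=2)

    def forward(self, x):
        # Input is a frame already upscaled to the target resolution
        # (e.g. bicubic); the network only restores high-frequency detail.
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        return self.conv3(x)
```

Because the frame is pre-upscaled before it enters the network, the per-frame cost depends on the output resolution rather than the scale factor, which is part of why the 4x rating is a limit of the trained weights rather than of the architecture itself.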

To address the concerns with AWS, immediately after our meeting Kunal double-checked his AWS account and found that the request had actually already been approved; he had just missed it. However, the approved quota was still insufficient, since the wrong number of vCPUs had been granted for our chosen instance: a P3 instance requires 8 vCPUs, whereas Amazon had only provided us with 1. After we followed up on our initial request, they replied within 2 days stating that they did not currently have enough resources to provide the vCPUs needed for a P3 instance, and instead recommended that we go with the G4 instances, which we had looked at previously and which were our second-best choice.

Concurrently, I also attempted to use Google Colab following Joel's advice, and ran into two main problems. First, as Joel had mentioned before, the free version shuts off after some period of inactivity. Second, the storage was very limited and could not fit the dataset we had chosen, which is close to 100 GB. As we were on a tight schedule, I simply bought the paid version for $10 without putting in a request, which addressed both concerns and upped the storage to around 180 GB, more than sufficient for our dataset. The code is running fast enough; after ironing out some bugs, I estimate the model will be fully trained by around Wednesday/Thursday this coming week. Since the code runs well on Google Colab, we are no longer using AWS, as Colab is also significantly more convenient.
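For anyone reproducing this setup, a quick sanity check at the top of the Colab notebook confirms which GPU was assigned and how much local disk is available before copying the dataset over. This is just the check I run under the assumption of a PyTorch-based setup; the exact numbers vary per session, and the ~180 GB figure above is simply what my Colab Pro session reported.

```python
# Colab sanity check: confirm GPU and local disk space before training.
# (Assumes a PyTorch environment; figures vary from session to session.)
import shutil
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))

# /content is Colab's default working directory.
total, used, free = shutil.disk_usage("/content")
print(f"Disk: {free / 1e9:.0f} GB free of {total / 1e9:.0f} GB total")
```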

For the coming week, since my role on the software section will be complete, I will be helping James and Kunal where necessary with the integration process.
