James’s Status for 10/30

Since our ongoing AWS problems were becoming critical-path for completing the project this week, I helped Josh look for alternative pre-trained models in case AWS/training fell through. Pre-trained models do exist, but most are not an exact fit for our use case. The ones we found were ‘rated’ for up to 4x upscaling, meaning their performance would degrade at the 4.5x scaling factor we will be using. Many also included extra layers of DSP preprocessing that we do not plan to use. If we are forced to fall back on a pre-trained model, we have settled on an open-source implementation of SRCNN on GitHub that omits the extra preprocessing, knowing that we may not attain the picture reconstruction accuracy we originally set out to achieve (since the model will only have been trained for good restoration up to 4x).
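For context, SRCNN without that preprocessing is just three convolutions run back-to-back on a bicubically upscaled input: in the original paper, a 9×9 conv from 1 to 64 channels, a 1×1 from 64 to 32, and a 5×5 from 32 back to 1, with ReLU after the first two. Below is a minimal CPU sketch of that one building block; the flattened tensor layout and function signature are my own illustration, not code from the repo we found.

```cpp
#include <cstddef>
#include <vector>

// Naive same-padded 2D convolution over multi-channel feature maps --
// the only building block SRCNN needs. Tensors are flattened row-major
// (illustrative layout, not from any specific repo):
//   in [cin][h][w], w [cout][cin][k][k], bias [cout], out [cout][h][w].
void conv2d(const std::vector<float>& in, const std::vector<float>& w,
            const std::vector<float>& bias, std::vector<float>& out,
            std::size_t cin, std::size_t cout,
            std::size_t h, std::size_t ww, std::size_t k, bool relu) {
    const long pad = static_cast<long>(k) / 2;  // "same" zero padding
    for (std::size_t oc = 0; oc < cout; ++oc)
        for (std::size_t y = 0; y < h; ++y)
            for (std::size_t x = 0; x < ww; ++x) {
                float acc = bias[oc];
                for (std::size_t ic = 0; ic < cin; ++ic)
                    for (std::size_t ky = 0; ky < k; ++ky)
                        for (std::size_t kx = 0; kx < k; ++kx) {
                            const long iy = static_cast<long>(y + ky) - pad;
                            const long ix = static_cast<long>(x + kx) - pad;
                            if (iy < 0 || ix < 0 ||
                                iy >= static_cast<long>(h) ||
                                ix >= static_cast<long>(ww))
                                continue;  // zero-padded border
                            acc += in[(ic * h + iy) * ww + ix] *
                                   w[((oc * cin + ic) * k + ky) * k + kx];
                        }
                // SRCNN applies ReLU after the first two layers only.
                out[(oc * h + y) * ww + x] = (relu && acc < 0.0f) ? 0.0f : acc;
            }
}
```

Stacking three calls of this (with the widths above) is the entire forward pass, which is also why the network is structurally scale-agnostic: the scale factor only shows up in the training data, hence the ‘rated up to 4x’ caveat.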

This week I also continued helping Kunal ramp up on host-side programming for the U96 board, and pointed him toward various resources so he could get started on the implementation.

I also set up a Git repository for the U96 Vitis project. As of now it contains only the vector-vector addition template example, as an aid to get Kunal started on programming the host; the host flow that template exercises is sketched below. I tried to make further incremental gains on the CNN kernel, but was unable to realise any this week. On the bright side, I was able to rule out a good few speedup strategies, so the design space is, at the very least, still converging. Kunal should be pretty much fully ramped by now, so I should have more time this coming week to explore the design space for CNN acceleration further.
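For anyone ramping alongside Kunal, the vadd template's host side boils down to the standard OpenCL/XRT flow: program the FPGA with the xclbin, set kernel arguments, migrate buffers, run, migrate back. This is a condensed sketch rather than the exact template code; the xclbin path, kernel name, and argument order here are assumptions based on the stock Vitis vector-addition example.

```cpp
#define CL_HPP_CL_1_2_DEFAULT_BUILD
#define CL_HPP_TARGET_OPENCL_VERSION 120
#define CL_HPP_MINIMUM_OPENCL_VERSION 120
#include <CL/cl2.hpp>

#include <fstream>
#include <iostream>
#include <vector>

int main(int argc, char** argv) {
    // Placeholder path -- in the Vitis template this comes from argv.
    const std::string xclbin_path = (argc > 1) ? argv[1] : "vadd.xclbin";
    constexpr std::size_t N = 4096;
    std::vector<int> a(N, 1), b(N, 2), out(N, 0);

    // 1. Find a platform and an accelerator device (the U96's PL).
    //    A real host should search for the "Xilinx" platform by name.
    std::vector<cl::Platform> platforms;
    cl::Platform::get(&platforms);
    std::vector<cl::Device> devices;
    platforms[0].getDevices(CL_DEVICE_TYPE_ACCELERATOR, &devices);
    cl::Device device = devices[0];

    cl::Context context(device);
    cl::CommandQueue q(context, device, CL_QUEUE_PROFILING_ENABLE);

    // 2. Program the FPGA with the xclbin and grab the kernel by name.
    std::ifstream bin(xclbin_path, std::ios::binary);
    std::vector<unsigned char> buf(std::istreambuf_iterator<char>(bin), {});
    cl::Program program(context, {device}, cl::Program::Binaries{buf});
    cl::Kernel vadd(program, "vadd");  // kernel name assumed from the template

    // 3. Wrap host arrays in device buffers; XRT handles the migration.
    cl::Buffer d_a(context, CL_MEM_USE_HOST_PTR | CL_MEM_READ_ONLY,  N * sizeof(int), a.data());
    cl::Buffer d_b(context, CL_MEM_USE_HOST_PTR | CL_MEM_READ_ONLY,  N * sizeof(int), b.data());
    cl::Buffer d_o(context, CL_MEM_USE_HOST_PTR | CL_MEM_WRITE_ONLY, N * sizeof(int), out.data());

    vadd.setArg(0, d_a);  // argument order assumed: in1, in2, out, size
    vadd.setArg(1, d_b);
    vadd.setArg(2, d_o);
    vadd.setArg(3, static_cast<int>(N));

    // 4. Host -> device, run the kernel, device -> host, then wait.
    q.enqueueMigrateMemObjects({d_a, d_b}, 0 /* host to device */);
    q.enqueueTask(vadd);
    q.enqueueMigrateMemObjects({d_o}, CL_MIGRATE_MEM_OBJECT_HOST);
    q.finish();

    std::cout << "out[0] = " << out[0] << " (expect 3)\n";
    return 0;
}
```

The nice part for our purposes is that swapping the vadd kernel for the CNN kernel later should only change step 3 onward (buffer shapes and kernel arguments), not the setup boilerplate.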
