So this week consisted of a lot of research for me. I started off by looking at most of the relevant resources from the original paper we are building our CNN from, project website found here.

The first thing I looked into was building a regular CNN that takes in multiple images, in this case burst images, feeds them through the network, and trains it to deblur them. The first paper below gave me a better structural understanding of how a CNN that takes in multiple images would run.

After reading most of that paper and getting a good understanding of these image-processing CNNs, I went to the next paper, which talks about the encoder/decoder CNN architecture. My understanding of this style is that the CNN takes in an image and, to speed up the process, shrinks it (or, in the terms of this paper, down-convolves it), learns on this smaller representation, and then up-convolves the information, giving a final result that is the same size as the image that was input.

Finally, after messing around with those two ideas, I also took a look at the Generative Adversarial Network (GAN) architecture, which I am not super familiar with, but our TA said this may be a better approach than using the encoder/decoder style. I've put a few rough code sketches below to help keep these ideas straight.
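To make the burst-input idea concrete, here is a minimal sketch of my own (not the actual network from the paper): the frames of the burst get stacked along the channel axis so a single CNN sees the whole burst at once and predicts one sharp image. The name BurstDeblurCNN and all the layer sizes are made up for illustration.

```python
import torch
import torch.nn as nn

class BurstDeblurCNN(nn.Module):
    """Toy multi-image CNN: a burst of N RGB frames in, one sharp RGB frame out."""

    def __init__(self, burst_size=5):
        super().__init__()
        # burst_size RGB frames -> 3 * burst_size input channels
        self.net = nn.Sequential(
            nn.Conv2d(3 * burst_size, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),  # one deblurred RGB image
        )

    def forward(self, burst):
        # burst: (batch, burst_size, 3, H, W) -> flatten frames into channels
        b, n, c, h, w = burst.shape
        return self.net(burst.view(b, n * c, h, w))

# Example: a burst of 5 frames at 128x128
model = BurstDeblurCNN(burst_size=5)
out = model(torch.randn(2, 5, 3, 128, 128))
print(out.shape)  # torch.Size([2, 3, 128, 128])
```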
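And here is a minimal sketch of the encoder/decoder idea, again my own illustration rather than the paper's architecture: strided convolutions shrink the image ("down convolve"), the network works on the smaller feature maps, and transposed convolutions grow it back ("up convolve") to the original resolution.

```python
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    """Toy encoder/decoder: shrink, process, then restore the input size."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # H x W  -> H/2 x W/2
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # -> H/4 x W/4
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # -> H/2 x W/2
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # -> H x W
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = EncoderDecoder()
x = torch.randn(1, 3, 128, 128)
print(model(x).shape)  # same spatial size as the input: [1, 3, 128, 128]
```

The appeal, as I understand it, is that most of the learning happens on the small feature maps, which is cheaper than convolving at full resolution the whole way through.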
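For the GAN idea, here is a very rough sketch of the general training loop as I currently understand it (the two networks here are placeholders, not a real design): a generator tries to produce a sharp image from a blurry one, while a discriminator learns to tell real sharp images from generated ones, and the two are trained against each other.

```python
import torch
import torch.nn as nn

# Placeholder networks just to show the adversarial training pattern
generator = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))
discriminator = nn.Sequential(
    nn.Conv2d(3, 1, 3, padding=1),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),  # one real/fake logit per image
)

bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

blurry = torch.randn(4, 3, 64, 64)  # stand-in batch
sharp = torch.randn(4, 3, 64, 64)

# One adversarial step:
# 1) discriminator learns real sharp images -> 1, generated images -> 0
fake = generator(blurry)
d_loss = (bce(discriminator(sharp), torch.ones(4, 1))
          + bce(discriminator(fake.detach()), torch.zeros(4, 1)))
d_opt.zero_grad()
d_loss.backward()
d_opt.step()

# 2) generator learns to fool the discriminator
g_loss = bce(discriminator(fake), torch.ones(4, 1))
g_opt.zero_grad()
g_loss.backward()
g_opt.step()
```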


Links to papers

https://arxiv.org/pdf/1607.04433.pdf 

http://www.f.waseda.jp/hfs/SimoSerraSIGGRAPH2016.pdf 

https://arxiv.org/pdf/1504.06852.pdf

