This week, we attended class and peer-reviewed our classmates' presentations, receiving feedback and Q&A from our peers, TAs, and professors. I also followed our schedule and completed my main tasks: familiarizing myself with the VMAF documentation and sorting out the details of our model selection and training with my team during meetings. Specifically, we discussed which dataset was best for our training, as well as our backup plan of using an existing algorithm if our CNN doesn't work within a reasonable amount of time, since the main portion of the project is hardware: implementing the algorithm on an FPGA to hardware-accelerate the upscaling process.
Looking at the VMAF documentation, I confirmed that the current version has a well-documented Python wrapper library. I had initially decided to use Python since it is the most intuitive option for ML, and Kunal can help with the beginning of the algorithm development, since he also has some ML experience and is eager to be involved. The VMAF GitHub repository also has some very useful scripts we can use during our training process. I briefly considered the sample datasets it provides, but after discussing with my teammates, we decided that the dataset from the CDVL (Consumer Digital Video Library) fits better, since it has a greater variety of videos, such as animation.
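As a first step toward the evaluation side of training, here is a minimal sketch of how we might score an upscaled clip against its reference with VMAF. It assumes an ffmpeg build with the libvmaf filter (one common way to run VMAF; the repository's own tools are an alternative), and it only constructs the command that an evaluation script would shell out to. The file names `upscaled.mp4` and `source.mp4` are placeholders, not actual project files.

```python
def vmaf_command(distorted, reference, log_path="vmaf.json"):
    """Build an ffmpeg invocation that computes VMAF for a video pair.

    `distorted` is the upscaled output; `reference` is the original source.
    Assumes ffmpeg was compiled with --enable-libvmaf.
    """
    return [
        "ffmpeg",
        "-i", distorted,   # distorted (upscaled) input first
        "-i", reference,   # reference input second
        "-lavfi", f"libvmaf=log_fmt=json:log_path={log_path}",
        "-f", "null", "-",  # discard decoded frames; we only want the score log
    ]

cmd = vmaf_command("upscaled.mp4", "source.mp4")
print(" ".join(cmd))
```

An evaluation script would pass this list to `subprocess.run` and then parse the per-frame and pooled VMAF scores out of the JSON log.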
Next week, I will begin the model development in Python with Kunal and work on the design presentation with the team. As detailed on the schedule, I will have to consult the VMAF documentation further, since this is my first time using it.