This week, we met with Byron and Joel again to discuss our project, specifically to address the feedback from our proposal presentation and to follow up on our initial meeting. During the meeting, we addressed the concerns about using VMAF as a metric for our training, as well as our dataset and a few other points that weren't fully justified during our presentation. Byron noted that we have to make sure a CNN actually outperforms traditional DSP methods, and that implementing something much harder is still the best choice. To that end, we benchmarked both VMAF and Anime4K, a GitHub project that applies a similar approach specifically to animation, and obtained concrete, quantitative measurements that we can elaborate on in our design presentation to fully justify our design choice.
Joel also raised a good point that an upscaled, lower-resolution video compared against the original, native-resolution video will always score lower, and we addressed that by limiting our training to comparisons between videos that have been upscaled to the native resolution, e.g. 1080p to 1080p. We also talked about the importance of benchmarking as soon as possible, which we successfully did this week.
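To give a sense of how that comparison works in practice, here is a minimal sketch of scoring an upscaled clip against its native-resolution reference with ffmpeg's libvmaf filter, similar to what we did for our local benchmarking; the file names are hypothetical, and both clips are assumed to already be at the same resolution (e.g. 1080p).

```python
import subprocess

# Hypothetical file names: "upscaled_1080p.mp4" is the upscaler's output after
# bringing a downscaled source back up to 1080p; "native_1080p.mp4" is the
# untouched native-resolution reference. Both are the same resolution so VMAF
# compares like with like.
UPSCALED = "upscaled_1080p.mp4"
REFERENCE = "native_1080p.mp4"

# ffmpeg's libvmaf filter takes the distorted clip as the first input and the
# reference as the second; the per-frame and pooled scores are written to the
# JSON log file.
cmd = [
    "ffmpeg", "-i", UPSCALED, "-i", REFERENCE,
    "-lavfi", "libvmaf=log_fmt=json:log_path=vmaf.json",
    "-f", "null", "-",
]
subprocess.run(cmd, check=True)
```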
Although our team members were slightly overwhelmed by work from other classes this week, we caught up by meeting after class hours and communicating to make sure our tasks were still completed on time. James and Kunal continued their research on I/O and calculated specific quantitative measurements for our design presentation, and I continued my research into VMAF and into the model we will use to train our upscaler. Referring back to our Gantt chart/schedule, we fell slightly behind on the Python code for training our own CNN, since we only received our AWS credits on Friday morning, but we used that time efficiently by benchmarking locally and researching Anime4K in more detail. Per Tamal's feedback, we are taking the risk of our CNN not working or not being developed in time on the software side more seriously: our backup plan is to use the CNN implemented in Anime4K and begin implementing it on hardware if we cannot get ours working on the software side after Week 7. We've updated our schedule/Gantt chart to reflect that.
Looking further into the peer/instructor feedback, we saw that there were many comments about the lack of justification for our FPGA selection during the proposal presentation. We've focused on elaborating on that choice much more for our design presentation, and we are similarly going into much more detail on our software section and our quantitative requirements.
Overall, despite some things not going as planned this week, we believe our team was very successful in overcoming the problems we encountered, and our initial planning, which built in slack time for small delays, proved useful. We look forward to delivering a well-prepared presentation on Monday that addresses all the feedback from our previous one, and to continued progress on our project.