This week, my group met with our TA, Joel, and Byron to refine our abstract and pin down the finer details of our implementation. Taking their recommendations on board, we decided to change our use case to security video streaming. Up-scaling video on demand in real time gives the user greater security and supports better decision-making when they are presented with potential threats. Client-side upscaling can also compensate for a poor internet connection. After considering the throughput and the use case, we also decided to target 24fps instead of 60fps, as this is more realistic while still being perceived as smooth video by the user.
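To make the throughput trade-off concrete, here is a rough back-of-envelope sketch. The 1080p output resolution is an illustrative assumption, not a confirmed project parameter:

```python
# Back-of-envelope estimate of the pixel throughput a real-time
# upscaler must sustain at different frame rates.
# The 1920x1080 output resolution is an assumption for illustration.

def pixels_per_second(width: int, height: int, fps: int) -> int:
    """Output pixels the upscaler must produce each second."""
    return width * height * fps

at_24fps = pixels_per_second(1920, 1080, 24)
at_60fps = pixels_per_second(1920, 1080, 60)

print(f"24 fps: {at_24fps / 1e6:.1f} Mpixels/s")  # ~49.8 Mpixels/s
print(f"60 fps: {at_60fps / 1e6:.1f} Mpixels/s")  # ~124.4 Mpixels/s
```

Under these assumptions, dropping from 60fps to 24fps cuts the required throughput by 2.5x, which is a meaningful relaxation of the per-frame compute budget on an FPGA.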
As Byron suggested, we examined existing browser upscaling methods in more depth and read some of the DSP literature he sent us. We concluded that neural-network methods were indeed more suitable for our implementation, and we are now working out how to fit such an architecture onto an FPGA.
In the coming week, we will further develop our schedule and confirm how we will procure the key components of our project. We will also set up team infrastructure such as GitHub so we can coordinate our progress better. Overall, we are on schedule and ready to move on to next week.