Team Status Report for 4/23

This week we met several times as a team to pin down some important implementation details of our project. On testing, we moved away from our previous baseline of a custom-built server that sent HTTP requests, and now plan to use a third-party load testing platform that can run user simulation scripts. This change accommodates the need for realistic user request patterns: requesting video chunks according to the buffer rates of videos with different specs (dynamism, resolution, etc.), which would be difficult without direct HTML interaction. A third-party platform would also let us consolidate user-collected metrics in one place and compile them directly into graphs and reports. We experimented with several platforms (Flood.io, Dotcom-Monitor, BlazeMeter) and script types (Taurus, Selenium, JMeter) but do not yet have a fully satisfactory load testing setup. We hope to have one early next week.
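To illustrate the buffer-driven request pattern described above, here is a minimal sketch of the kind of user simulation a load testing script would run. All names and constants (chunk length, buffer threshold) are hypothetical assumptions, not our final script: the simulated client "plays back" buffered seconds over time and requests the next chunk only when its buffer dips below a target, rather than firing requests at a fixed rate.

```python
# Hypothetical sketch of a buffer-driven video client for load testing.
# Constants are assumptions; a real script would vary them per video spec.

CHUNK_SECONDS = 4   # seconds of video delivered per chunk (assumed)
BUFFER_TARGET = 12  # request another chunk when buffer falls below this

def simulate_session(total_chunks, tick=1):
    """Return the playback ticks at which the client issues chunk requests."""
    buffer_level = 0      # seconds of video currently buffered
    requested = 0
    request_times = []
    t = 0
    # Run until every chunk has been requested and the buffer drains.
    while requested < total_chunks or buffer_level > 0:
        if requested < total_chunks and buffer_level < BUFFER_TARGET:
            request_times.append(t)       # issue a chunk request now
            buffer_level += CHUNK_SECONDS
            requested += 1
        # Playback consumes buffered seconds each tick.
        buffer_level = max(0, buffer_level - tick)
        t += tick
    return request_times
```

Note the bursty startup followed by steady pacing: the client requests quickly until the buffer fills, then only as playback drains it, which is the pattern a fixed-rate HTTP generator cannot reproduce.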

We also adjusted our plan for video server metric collection. Analysis of the impact of video server responses showed that network I/O is a much more likely bottleneck for our video servers than processor utilization, so we now intend to retrieve that metric instead for a custom load balancing decider. The metric would be collected either via asynchronous background monitoring in our own program or via API calls to an Amazon CloudWatch monitor. Over the next week, we hope to fully integrate our testing environment and finish three custom load balancing deciders for comparison.
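As a sketch of what a network-I/O-based decider might look like (all class and constant names here are hypothetical, and the sampling mechanism is left abstract so it could be fed by either background monitoring or CloudWatch calls): each server keeps a short window of recent throughput samples, and the decider routes the next request to the server with the lowest recent average.

```python
from collections import deque

WINDOW = 5  # number of recent throughput samples to average (assumed)

class ServerStats:
    """Per-server record of recent network I/O samples (hypothetical)."""
    def __init__(self, name):
        self.name = name
        self.samples = deque(maxlen=WINDOW)  # bytes/sec, newest last

    def record(self, bytes_per_sec):
        # Called by whatever collects the metric: a background poller
        # in our own program, or a CloudWatch API response handler.
        self.samples.append(bytes_per_sec)

    def avg_io(self):
        # Treat an unsampled server as unloaded so it receives traffic first.
        return sum(self.samples) / len(self.samples) if self.samples else 0.0

def pick_server(servers):
    """Decider: route to the server with the lowest average recent I/O."""
    return min(servers, key=lambda s: s.avg_io())
```

The windowed average smooths out momentary spikes in throughput, which matters because chunked video traffic is naturally bursty; a single-sample decider would thrash between servers.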
