Nakul’s Status Report for 4/30

This week I was in charge of creating the user test scripts, which simulate the behavior of three different user types: the first rapidly switches between all three videos on our streaming server; the second watches each video through its entire length before switching to the next one; the third watches 20 to 30 seconds of a video and then switches to the next one.

The test scripts were created using Apache JMeter, a load-testing tool that can record HTTP requests and replay them to simulate load. Using its HTTP(S) Test Script Recorder, I recorded my interactions with the browser to create the test script. The recorder creates a proxy server that intercepts the requests flowing between the browser and our video server. This allows me to simulate the three different user classes being tested.

JMeter allows multiple thread groups to run simultaneously, so I was able to create duplicate thread groups within the same .jmx file. I created a new file for the RandomLB target that contains one thread group for each of the three user classes, which lets us run all of the tests against the same server at the same time. I then duplicated the RandomLB file for the round-robin load balancer and changed the target domain of every sampler to the RoundRobin proxy.

This JMeter file was then run on BlazeMeter, a cloud load-testing service that lets us run the script with many concurrent users.

This week I intend to complete the collection of network I/O metrics retrieved from CloudWatch. I have set up the CloudWatch dashboard to collect NetworkPacketsOut data from video servers 1 through 5. I still need to figure out a way to collect this data asynchronously in our LB proxy using the CloudWatch GetDashboard or GetMetricData APIs.
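As a rough sketch of what that asynchronous collection could look like inside the proxy, here is a polling loop using the AWS SDK for JavaScript v3 GetMetricData call (the region, polling interval, and instance ID below are placeholders rather than our actual configuration):

```typescript
import { CloudWatchClient, GetMetricDataCommand } from "@aws-sdk/client-cloudwatch";

const cw = new CloudWatchClient({ region: "us-east-1" }); // region is a placeholder

// Fetch the last 10 minutes of NetworkPacketsOut for one video server.
async function fetchPacketsOut(instanceId: string): Promise<number[]> {
  const now = new Date();
  const command = new GetMetricDataCommand({
    StartTime: new Date(now.getTime() - 10 * 60 * 1000),
    EndTime: now,
    MetricDataQueries: [
      {
        Id: "packetsOut",
        MetricStat: {
          Metric: {
            Namespace: "AWS/EC2",
            MetricName: "NetworkPacketsOut",
            Dimensions: [{ Name: "InstanceId", Value: instanceId }],
          },
          Period: 60, // 1-minute periods, matching the collection limit noted in these reports
          Stat: "Average",
        },
      },
    ],
  });

  const response = await cw.send(command);
  return response.MetricDataResults?.[0]?.Values ?? [];
}

// Poll once per minute inside the LB proxy without blocking request handling.
setInterval(() => {
  fetchPacketsOut("i-0123456789abcdef0") // placeholder instance ID for one of video servers 1-5
    .then((values) => console.log("NetworkPacketsOut samples:", values))
    .catch(console.error);
}, 60 * 1000);
```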

Nakul’s Status Report for 4/23

This week I worked on collecting OS metrics from our Node.js video server. I had to test different APIs and modules, since some of them were not compatible with our particular server deployment. I first used the node-os-utils module to retrieve metrics on CPU usage and network I/O, but found that its netstat feature was not supported on my OS, so I tested it on other platforms, including a virtual Andrew Linux machine as well as our deployed EC2 instances.

Since this approach was not working for our video server, I started looking into the Linux netstat utility, which reports network I/O statistics. I was not able to find an API that allowed for easy collection of these metrics, so I finally turned to AWS CloudWatch, Amazon's metric-logging service for servers deployed on EC2. CloudWatch provides a package for retrieving metrics from Node servers, which I will be using to fetch network I/O metrics. The tradeoff of using CloudWatch is that it only lets us collect these metrics once per minute, as opposed to collecting them every time a response is sent from the video server. This week I will be working on finishing up this metric collection as well as developing a system to visualize our user-simulation data.
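For reference, the first approach looked roughly like the sketch below (assuming the node-os-utils package; the netstat call is the part that turned out to be unsupported on some platforms, and the exact fields it returns vary):

```typescript
// Sketch of per-server OS metric sampling with node-os-utils.
// @types/node-os-utils can be installed for typings; the result here is untyped.
const osu = require("node-os-utils");

async function sampleServerMetrics(): Promise<void> {
  // CPU usage works across platforms and resolves to a percentage.
  const cpuPercent: number = await osu.cpu.usage();
  console.log(`CPU usage: ${cpuPercent}%`);

  // Network I/O is Linux-only; on unsupported platforms the library
  // resolves to the string "not supported" instead of a stats object.
  const net = await osu.netstat.inOut();
  if (net === "not supported") {
    console.log("netstat metrics unavailable on this platform");
  } else {
    console.log("Network I/O (MB):", net.total);
  }
}

sampleServerMetrics().catch(console.error);
```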

Nakul’s Status Report for 4/16

This week I spent my time working on the user-simulation scripts. The goal is to create Python scripts that mimic user behavior on our video server so that we can test its performance as well as benchmark the performance of our load balancer.

I spent time learning the Python requests library and created a script that can send requests to the deployed RandomLB and RoundRobinLB instances. After a request has been sent, I print the response object received from the server and isolate the HTML content.

Going forward I now need to calculate metrics such as response time as well as buffer length and bitrate. Even though these user-end metrics are not going to be used for optimizing our load balancer, they are crucial for testing the performance of both our video server and our load balancer. Since our video server serves video files chunk by chunk, I will measure the response time of each chunk (roughly once a second) and then divide the total number of bytes received by the total response time to determine the average bitrate for that particular video file.
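The simulation scripts themselves are written in Python, but the measurement logic is simple enough to sketch. The snippet below (TypeScript, to match the Node-side examples in these reports) assumes a hypothetical /video endpoint that honors Range requests and derives the average bitrate from total bytes over total response time:

```typescript
// Hypothetical endpoint and chunk size; the real scripts use Python's requests library.
// Relies on the global fetch available in Node 18+.
const VIDEO_URL = "http://example-loadbalancer/video";
const CHUNK_SIZE = 1024 * 1024; // 1 MB, matching the server's chunking

async function measureAverageBitrate(totalSize: number): Promise<number> {
  let bytesReceived = 0;
  let elapsedMs = 0;

  for (let start = 0; start < totalSize; start += CHUNK_SIZE) {
    const end = Math.min(start + CHUNK_SIZE - 1, totalSize - 1);
    const t0 = Date.now();
    const res = await fetch(VIDEO_URL, { headers: { Range: `bytes=${start}-${end}` } });
    const body = await res.arrayBuffer();
    elapsedMs += Date.now() - t0; // per-chunk response time
    bytesReceived += body.byteLength;
  }

  // Average bitrate = total bits transferred / total seconds spent waiting.
  return (bytesReceived * 8) / (elapsedMs / 1000);
}

measureAverageBitrate(50 * 1024 * 1024) // placeholder: a 50 MB video
  .then((bps) => console.log(`Average bitrate: ${(bps / 1e6).toFixed(2)} Mbps`))
  .catch(console.error);
```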

Nakul’s Status Report for 3/26

This week I completed development of the video streaming server using Node.js and Express. The video server sends 1 MB chunks of the video to the front end through the response object. The browser then renders each chunk and displays it in an HTML video element.

The server also allows users to scrub through the video and sends only the chunk that needs to be streamed. The range the user requests is included in the Range header sent by the browser and is read in the backend to determine which bytes to send back. The next step will be to integrate MongoDB so that the web server can act as a database server holding multiple video files. Through my research I found that MongoDB stores files as a series of fixed-size chunks (via GridFS), so video files would need to be broken down into several such chunks.
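A minimal sketch of that range-handling route, assuming Express, a placeholder ./videos/sample.mp4 file, and the 1 MB cap described above:

```typescript
import express from "express";
import fs from "fs";
import path from "path";

const app = express();
const CHUNK_SIZE = 1024 * 1024; // cap each response at roughly 1 MB

// Hypothetical route and file path; the real server's names may differ.
app.get("/video", (req, res) => {
  const videoPath = path.join(__dirname, "videos", "sample.mp4");
  const videoSize = fs.statSync(videoPath).size;

  const range = req.headers.range; // e.g. "bytes=1048576-"
  if (!range) {
    res.status(400).send("Range header required");
    return;
  }

  // Parse the requested byte range and serve at most CHUNK_SIZE bytes of it.
  const [startStr, endStr] = range.replace("bytes=", "").split("-");
  const start = parseInt(startStr, 10);
  const requestedEnd = endStr ? parseInt(endStr, 10) : videoSize - 1;
  const end = Math.min(requestedEnd, start + CHUNK_SIZE - 1, videoSize - 1);

  // 206 Partial Content tells the browser which slice of the file this is,
  // so the HTML video element can keep requesting subsequent ranges or scrub.
  res.writeHead(206, {
    "Content-Range": `bytes ${start}-${end}/${videoSize}`,
    "Accept-Ranges": "bytes",
    "Content-Length": end - start + 1,
    "Content-Type": "video/mp4",
  });

  // Stream only the requested byte range back to the browser.
  fs.createReadStream(videoPath, { start, end }).pipe(res);
});

app.listen(3000);
```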

Nakul’s Status Report for 3/19

This week we met as a group to discuss future plans for our project. Since we were behind schedule, we created a new plan and breakdown of tasks so that we can have our MVP ready in time for the demo.

I worked on building the front end of the video streaming server. I designed a basic HTML layout and deployed it using Node.js. In the coming week I plan to work on the streaming component of our video server. Through my research I have found a suitable approach to streaming chunks of the video file from the web server to the browser, and I will have it implemented in the next week.
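Roughly, the layout is just a page with an HTML video element pointing at the streaming route that is still to come; a minimal sketch (the route name and port are placeholders):

```typescript
import express from "express";

const app = express();

// Placeholder front-end route: a bare-bones page whose video element will
// pull from the streaming endpoint once it exists.
app.get("/", (_req, res) => {
  res.send(`
    <html>
      <body>
        <h1>Video Streaming Server</h1>
        <video width="640" controls>
          <source src="/video" type="video/mp4" />
        </video>
      </body>
    </html>
  `);
});

app.listen(3000, () => console.log("Front end running on port 3000"));
```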

Nakul’s Status Report for 2/26

This week I spent my time looking into the development of our web server. I met with my web application development professor, Jeff Eppinger, and we spoke at length about three-tier architectures as well as developing a video streaming server. From our conversation I learned that Django is mainly designed around HTTP requests and would not be an ideal choice for a video streaming server, as Django Channels (Django's framework for WebSockets) is difficult to use and not very efficient.

As a group we discussed switching our backend to another framework better suited to the WebSockets that would be required to stream video. We are currently researching different backend frameworks that would be ideal for running a video streaming server.

This coming week we are going to do further research into the three-tier architecture and deploy a sample project across three tiers using AWS EC2 instances, as we recently received our credits.

Nakul’s Status Report for 2/19

This week I spent most of my time working on the design presentation with my group. I researched the use of Django and NGINX to create a web application for our video server, and I planned the best course of action for building the web app given that we do not yet have access to our AWS credits.

I also researched different EC2 instance types and the tradeoffs of each. We have now narrowed down the instance type that will allow us to effectively create a multi-tier architecture to simulate user testing and load.

I also worked on coming up with a testing plan for user simulation. I researched user testing of web apps and came up with a testing plan for each quantitative metric associated with the front end of our application. I determined that running 30 devices (or VMs) sending between 1 and 50 requests every 10 minutes will be appropriate for benchmarking each of the metrics.

Nakul’s Status Report for 2/12

This week was spent planning and creating our team's proposal presentation. I contributed by researching the use-case requirements of our web application as well as the custom load balancer. I read through research papers on load balancing to determine appropriate metrics for measuring the performance of our custom load balancer; these metrics will also be used to define the loss/reward function of our machine learning algorithms. I also read a paper on different load-balancer implementations using different algorithms, which gave me more perspective on how we should design our load balancer as well as the multi-tiered architecture in which it will sit.

We have been slightly set back because we have not yet received the AWS credits that would allow us to create a multi-tiered deployment of our web application. We are on track with our schedule for collecting and reading through resources on load balancing and multi-tier architectures.

Next week I hope to start work on the video streaming application we will use to test our load-balancing algorithms. I also hope to deploy a sample web app on a multi-tier architecture through AWS.