Mitul’s Status Report for 3/26

This week I continued working on the local implementation of our server architecture. I got a working version of the multi-node data flow that can be redirected at different stages: a designated node (in this case, the front-end node) relays information to one of two back-end/database nodes in order to begin a new data flow. In Node.js, it was even possible to serve a fragmented video stream in which the data is sent in small chunks of about 1 MB, relayed from database to web server to browser. Some debugging remains for scrubbing (jumping the playback position to another point in the video).
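As a rough illustration of the redirect mechanism, the sketch below has a front-end node relay each request to one of two back-end nodes; the hostnames, ports, and simple alternation rule are placeholders rather than our actual configuration.

    // Minimal sketch of the redirect: the front-end node relays each request
    // to one of two back-end nodes and pipes the response back to the client.
    // Hostnames, ports, and the alternation rule are placeholders.
    const http = require('http');

    const backends = [
      { host: 'localhost', port: 8081 },
      { host: 'localhost', port: 8082 },
    ];
    let next = 0; // which back end receives the next data flow

    http.createServer((clientReq, clientRes) => {
      const target = backends[next];
      next = (next + 1) % backends.length; // the hook where a load balancer would decide
      const proxyReq = http.request({
        host: target.host,
        port: target.port,
        path: clientReq.url,
        method: clientReq.method,
        headers: clientReq.headers,
      }, (proxyRes) => {
        clientRes.writeHead(proxyRes.statusCode, proxyRes.headers);
        proxyRes.pipe(clientRes); // relay the back end's data on to the browser
      });
      clientReq.pipe(proxyReq);
    }).listen(8080);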

In working on this locally deployed application, I noticed that for our intended architecture, the back-end server tier adds exceedingly little functionality beyond what the database tier provides. I also observed that our intended virtual machine specification for the database nodes will likely meet the demands of both the back-end and video storage functionality at once. I therefore proposed to the group that we condense the back-end and database server tiers into one comprehensive database access tier, and the idea was well received. This change would streamline our server architecture for greater speed and reduced deployment costs.

Nakul’s Status Report for 3/26

This week I completed development of the video streaming server using Node.js and Express. The video server sends 1 MB chunks of the video to the front end through the response object. The browser then renders each chunk and displays it in an HTML video element.
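A minimal sketch of this setup, assuming a placeholder file name and port (the actual server holds more state than this):

    // Express streams the video through the response object in ~1 MB pieces;
    // the browser renders them in an HTML video element pointed at this route
    // (e.g. <video src="/video" controls>).
    const express = require('express');
    const fs = require('fs');
    const app = express();

    const CHUNK_SIZE = 1024 * 1024; // ~1 MB per piece

    app.get('/video', (req, res) => {
      const path = 'sample.mp4'; // placeholder file name
      const size = fs.statSync(path).size;
      res.writeHead(200, { 'Content-Type': 'video/mp4', 'Content-Length': size });
      // The read stream emits the file in pieces; highWaterMark caps their size.
      fs.createReadStream(path, { highWaterMark: CHUNK_SIZE }).pipe(res);
    });

    app.listen(3000);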

The server also allows users to scrub through the video, sending only the chunk that needs to be streamed. The range the user requests arrives in the Range header of the browser's request and is read on the back end to determine which bytes to send back. The next step will be to integrate MongoDB so that the web server can act as a database server holding multiple video files. Through my research I found that MongoDB stores large files as sequences of fixed-size chunks (via its GridFS mechanism), so video files would need to be broken down into many such chunks.
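The Range handling looks roughly like the following sketch; the file name, chunk size, and simplified header parse are illustrative, and a full parser would also honor an explicit end byte in the Range header.

    // When the user scrubs, the browser sends a header like "bytes=1048576-",
    // and the server replies with 206 Partial Content carrying ~1 MB of video.
    const express = require('express');
    const fs = require('fs');
    const app = express();

    const CHUNK_SIZE = 1024 * 1024; // send ~1 MB per response

    app.get('/video', (req, res) => {
      const path = 'sample.mp4';          // placeholder file name
      const size = fs.statSync(path).size;
      const range = req.headers.range;    // e.g. "bytes=1048576-"
      if (!range) return res.status(416).send('Range header required');

      const start = Number(range.replace(/\D/g, '')); // first requested byte
      const end = Math.min(start + CHUNK_SIZE - 1, size - 1);

      res.writeHead(206, { // 206 Partial Content
        'Content-Range': `bytes ${start}-${end}/${size}`,
        'Accept-Ranges': 'bytes',
        'Content-Length': end - start + 1,
        'Content-Type': 'video/mp4',
      });
      fs.createReadStream(path, { start, end }).pipe(res); // end is inclusive
    });

    app.listen(3000);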

Jason’s Status Report for 3/19

Over the past two weeks, on top of discussing with the team the transition from Tornado to NodeJS for our web application's implementation, I mainly began exploring how we will use AWS to serve our application once the basic implementation is in place.

I did some research and found tutorials, in both website and video form, on how to implement a multi-tier architecture on AWS, including which instances to use for different purposes. After setting up some basic connections between instances in the architecture, I sent some test requests through these connections to check that they function properly. While experimenting with the load balancer that AWS provides, I gained some insight into how it connects with the rest of the architecture and which values it monitors. I will share these insights with Mitul to help with his implementation of our own load balancer.

In the coming week, I will be looking into how to set up a monitor node that receives and parses data from both the front end and the back end in order to gain a more comprehensive perspective on the performance of our load balancer.

Nakul’s Status Report for 3/19

This week we met as a group to discuss the future plans for our project. Since we were behind schedule, we created a new plan and breakdown of tasks so that we can have our MVP ready in time for the demo.

I worked on building the front end of the video streaming server. I designed a basic HTML layout and deployed it using Node.js. In the coming week I plan to work on deploying the streaming component of our video server. Through my research I have found a suitable approach for streaming chunks of the video file to the web server and will have it implemented in the next week.

Team Status Report for 3/19

We met as a team several times to better define our respective programming roles for the application as well as our integration plan. On implementation, we narrowed down our stack and began work with NodeJS for our web framework, React.js for front-end served content, and Django (Python) for user simulation. We also worked together on some initial debugging of aspects of the application that differ from the tutorials they were based on.

We ran into a design concern regarding the role of the front-end tier in the video data stream that flows from database to user (whether the front end is part of the chain or is bypassed). For now, we will keep it in the chain, as this keeps our data flow more uniform between nodes. However, we recognize that we may need to change our VM choice for front-end nodes during the deployment stage accordingly: if front-end nodes handle a load comparable to the back end's, they may need to better match the back end's capabilities.

Over the next two weeks, we plan to integrate our application-side work. We expect the minimum viable product to be a sufficiently complete environment for implementing the traditional load balancing algorithm nodes and creating testing modules for them.

Mitul’s Status Report for 3/19

Over the past three weeks, I worked on an implementation prototype for our server architecture. I researched NodeJS as an alternative to Tornado for our web framework and found more implementation tutorial content for it, in both video and website form. I presented this information to the team, and we collectively agreed to switch our intended stack to NodeJS for convenience.

While Nakul worked on the high-level aspects of the application (e.g. the video player), I delved into the internal structure of how our server architecture can send and receive data. I learned about Express, a back-end web framework for NodeJS, which was useful for creating data streams that send small pieces of data in sequence. I experimented with this feature to create simple data streams across nodes that can be rerouted manually. I am now working on a smoother process for ensuring the correct data is sent to specific nodes.
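A toy version of that experiment looks like the sketch below; the port and payloads are placeholders. A relay node can consume this response and forward the pieces as they arrive.

    // A source node writes small pieces of data in sequence over one response.
    const http = require('http');

    http.createServer((req, res) => {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      let n = 0;
      const timer = setInterval(() => {
        res.write(`piece ${n} of the stream\n`); // one small piece at a time
        if (++n === 10) {
          clearInterval(timer);
          res.end();
        }
      }, 100);
    }).listen(9001);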

After that, Nakul and I will integrate our current work into a unified codebase. We expect to create an application that delivers a functional, locally deployable multi-tier video stream. Though the tiers will be nominal, with roughly two nodes each, they can serve as a proof of concept for the load balancing task. The small app can function as an early, local test for our traditional, more easily implemented load balancers: the data stream can be rerouted among the few nodes randomly or in order (RandomLB and RoundRobin, respectively).
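For reference, the two policies reduce to sketches like these (the function names and interface are illustrative, not settled code); either picker can then drive the relay node's choice of target back end.

    // Each balancer returns the index of the node that should receive the
    // next data flow.
    function makeRandomLB(numNodes) {
      return () => Math.floor(Math.random() * numNodes); // RandomLB
    }

    function makeRoundRobinLB(numNodes) {
      let next = 0;
      return () => {
        const choice = next;
        next = (next + 1) % numNodes; // RoundRobin: cycle through nodes in order
        return choice;
      };
    }

    // Example: decide which of two nodes serves each incoming request.
    const pickNode = makeRoundRobinLB(2);
    console.log(pickNode(), pickNode(), pickNode()); // 0 1 0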

Team Status Report for 2/26

This week we met up as a group twice to discuss some additional potential risks in our project. In particular, we discussed some external advice regarding the pitfalls of video streaming servers and how they may not work well with our accustomed technology suite (Django). We therefore looked into alternatives for both the application area and the overarching web server framework.

Over the next week, we plan to consolidate that understanding into our initial AWS deployment and connectivity tests. Furthermore, we will complete the majority of the design review report that we started this week. We expect the design report to contain a tentative, more detailed technology suite for the overall project; we will adjust that plan with further testing.

Mitul’s Status Report for 2/26

This week we got our AWS credits, so we are ready to start initial deployment and connectivity tests in the following week. In preparation for those tests, I watched some tutorial videos and read some articles regarding multi-tier server deployment on AWS. In particular, I found this AWS article on Virtual Private Cloud insightful: https://aws.amazon.com/vpc/ . A virtual private cloud would allow for various high-level tasks such as creating and integrating new nodes, as well as tiers for those nodes; this would effectively give us centralized control and some preliminary monitoring for our overall server architecture.

I also did some further research on our possible database implementation. AWS offers specialized node instances for database storage, and our naive solution (for the MVP) is to have several nodes that each store our full database archive. We plan to make the video catalogs on the databases read-only during active deployment, as this will make the distributed system more straightforward to implement. We will consider a more dynamic database solution if this leads to storage overruns during later testing.

I also created a shared document for the design report and imported our presentation content to act as a starting template. We will continue to work on this report in the upcoming week.

Jason’s Status Report for 2/26

While waiting to receive our AWS credits this week, I spent most of my time researching alternatives to Django for our web application's framework. While Django's simplicity allows for easy and rapid development, it is not asynchronous, which makes it less suitable for long-lived connections such as a video stream. Some alternatives I found include Nginx, Node.js, and Tornado. In particular, Tornado, while not having nearly as many powerful and robust abstractions as Django, can support many open connections, which makes it ideal for applications that require long-lived connections, such as the video application we plan on implementing.

Since we received our requested AWS credits on Friday, this coming week I will start setting up test connections on AWS while also hashing out the details of the design review due on Wednesday. Then, over spring break, I will set up the back end of the web application in order to begin testing our load balancer.

Nakul’s Status Report for 2/26

This week I spent my time looking into the development of our web server. I met with my web application development professor, Jeff Eppinger, and we spoke at length about three-tier architectures as well as developing a video streaming server. From our conversation I learned that Django is designed mainly for HTTP requests and would not be an ideal choice for a video streaming server, as Django Channels (Django's framework for WebSockets) is difficult to use and not very efficient.

As a group, we discussed switching our back-end server to another framework better suited for the WebSockets that would be required to stream video. We are currently researching back-end frameworks that would be ideal for running a video streaming server.

This coming week we are going to do further research into the three-tier architecture and deploy a sample project across three tiers using AWS EC2 instances, as we recently received our credits.