Network adaptive realtime video player and decoder
Salman Yussof - yussof@andrew.cmu.edu
Diego Iglesias - iglesias@andrew.cmu.edu
Project Description:
--------------------
We are going to take the CMU H.263 v2 codec and use it to implement a
network adaptive realtime video player and decoder. The result would be
a transmitter that sends a compressed bitstream across the network and a
receiver that decompresses the bitstream and plays the video on the fly.
We would, however, pre-encode the bitstreams before transmission.
During transmission, the player/decoder would periodically send feedback
to the server (transmitter) reporting the network condition (e.g., the
number of packets dropped) and the quality of the decoded video. Using
this information, the server would adjust the number of bits sent using
a technique called scalability.
Scalability is the ability to encode the video in such a way that the
bitstream contains both the information to reconstruct a coarse version
of each frame and finer refinements of it. In our case, depending on the
condition of the network, we could decide whether to send only the
coarse (base) bitstream or also the enhancement layers. This way, we
would be able to maintain a constant frame rate at the player side by
transmitting fewer bits when the network is congested.
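The adaptation rule described above could look something like the
following sketch. This is purely illustrative (the thresholds and the
function name are our own invention, not part of the codec); the real
decision logic would live in the modified transmitter:

```python
# Hypothetical server-side adaptation rule: given the packet-loss rate
# reported by the receiver, decide how many enhancement layers of the
# pre-encoded bitstream to send on top of the base layer.
# The thresholds here are illustrative placeholders.

def select_layers(loss_rate, num_enhancement_layers):
    """Return the number of enhancement layers to transmit.

    loss_rate: fraction of packets the receiver reported lost (0.0-1.0).
    num_enhancement_layers: how many enhancement layers were pre-encoded.
    """
    if loss_rate > 0.10:           # heavy congestion: base layer only
        return 0
    if loss_rate > 0.02:           # mild congestion: drop the top layer
        return max(num_enhancement_layers - 1, 0)
    return num_enhancement_layers  # clean network: send everything
```

A smoother rule (e.g., one layer dropped per loss threshold crossed)
would also work; the point is only that the transmitter maps feedback to
a layer count.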
There are three types of scalability:
- SNR scalability
- spatial scalability
- temporal scalability
In our project we would experiment with these three types of
scalability and see which of them is best suited to a network adaptive
application. We would also try combinations of them and see whether a
combination works better than any individual type.
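As a concrete illustration of one of the three types, temporal
scalability can be pictured as splitting the frame sequence into a base
layer and an enhancement layer. The sketch below is our own
simplification, not codec code:

```python
# Illustrative sketch of temporal scalability: the base layer carries
# every other frame, and the enhancement layer carries the frames in
# between. If the network is congested, dropping the enhancement layer
# halves the frame rate instead of corrupting the picture.

def split_temporal_layers(frames):
    base = frames[::2]          # frames 0, 2, 4, ... form the base layer
    enhancement = frames[1::2]  # frames 1, 3, 5, ... refine the frame rate
    return base, enhancement
```

SNR and spatial scalability work analogously, but the enhancement layer
refines picture quality or resolution rather than frame rate.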
->Implementation details:
We would take the CMU H.263 codec and modify it to add scalability
support. We also have to write our own YUV player on Unix. This YUV
player needs to be integrated with the decoder and the network receiver
so that it can receive the bitstream from the network, run it through
the decoder, and then display the resulting video. The transmitter
(server) can be implemented independently since the bitstreams are
pre-encoded.
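The receiver-side wiring could be structured roughly as follows. The
decoder and display here are stand-in stubs of our own (in the project
they would be the modified CMU H.263 decoder and our YUV player), so
this only shows the flow of data, not real decoding:

```python
# Minimal sketch of the receiver pipeline: each arriving bitstream
# packet is run through the decoder and the resulting frame is handed
# to the player. Both decode() and the display callback are stubs.

def decode(packet):
    # Stub: a real decoder would turn the bitstream chunk into a YUV frame.
    return {"frame": packet}

def receiver_pipeline(packets, display):
    """Feed each received packet through the decoder, then display it."""
    for packet in packets:
        frame = decode(packet)
        display(frame)
```

In the real system the packet source would be a network socket rather
than an in-memory list, but the pipeline shape stays the same.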
Once we are able to transmit the video, decode it, and display it
properly, we would give the player the ability to send feedback, and the
transmitter should react to this feedback by choosing the right bits to
send. We would then simulate several of these transmitters and receivers
running at the same time and test which scalability types perform
better.
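The feedback message itself could be as simple as a loss estimate
computed from packet sequence numbers. The format and field names below
are hypothetical; the real protocol would be defined when we build the
player:

```python
# Hypothetical receiver-side feedback: compare the sequence numbers
# actually received against the range the transmitter sent to estimate
# the loss rate that gets reported back to the server.

def make_feedback(received_seqs, first_seq, last_seq):
    """Build a feedback report for packets first_seq..last_seq inclusive."""
    expected = last_seq - first_seq + 1
    lost = expected - len(set(received_seqs))  # de-duplicate reordered packets
    return {"expected": expected,
            "lost": lost,
            "loss_rate": lost / expected}
```

The server would feed the reported loss rate into its layer-selection
logic on the next transmission interval.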
->Project milestones
Midterm:
- Create the YUV player and both the transmitter and receiver.
- Integrate the player with the receiver and decoder so that it can
play a video stream on the fly
- Be able to show that we could play yuv video across the network
Final:
- Provide the scalability capability
- Provide the feedback capability
- Make sure that the bits are correctly sent and decoded given a
particular feedback
- Simulate several of these transmitters and receivers running at the
same time and test each scalability type.
- Be able to show the resulting decoded video for each scalability
type and compare them to see whether any is better than the others.