Pacing, Padding and AutoMuter

Over the last few months these terms have been appearing often in the WebRTC codebase and discussions.  In this post I will give a very brief description of their meaning, behaviour and implementation status, together with some comments or source code that help illustrate each feature.

Pacing or Smoothing
When transmitting video it is common to have traffic peaks when sending a video frame, because a frame consists of many RTP packets (from 1 up to even 50 in the case of a key frame) that are sent at the same instant in time.   The solution to alleviate that problem is to add some spacing between those packets, or even between frames if needed.   This is known as pacing or smoothing, and in the Google WebRTC implementation it is enabled by default in the latest versions.

  // Set the pacing target bitrate and the bitrate up to which we are allowed to
  // pad. We will send padding packets to increase the total bitrate until we
  // reach |pad_up_to_bitrate_kbps|. If the media bitrate is above
  // |pad_up_to_bitrate_kbps| no padding will be sent.
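The core idea can be sketched as a leaky bucket: packets are queued and released at a rate bounded by the pacing bitrate instead of in a single burst.  This is a minimal hypothetical sketch, not the actual webrtc::PacedSender:

```cpp
#include <cstddef>
#include <queue>

// Minimal leaky-bucket pacer sketch (hypothetical, not the real
// webrtc::PacedSender). Enqueued packets are released only as fast
// as the pacing bitrate allows.
class SimplePacer {
 public:
  explicit SimplePacer(int pacing_bitrate_bps)
      : pacing_bitrate_bps_(pacing_bitrate_bps), budget_bytes_(0) {}

  void EnqueuePacket(int size_bytes) { queue_.push(size_bytes); }

  // Called periodically (e.g. every 5 ms). Refills the byte budget for
  // the elapsed interval and returns how many packets may be sent now.
  int Process(int elapsed_ms) {
    budget_bytes_ += pacing_bitrate_bps_ / 8 * elapsed_ms / 1000;
    int sent = 0;
    while (!queue_.empty() && budget_bytes_ >= queue_.front()) {
      budget_bytes_ -= queue_.front();
      queue_.pop();
      ++sent;
    }
    return sent;
  }

  size_t QueueSize() const { return queue_.size(); }

 private:
  int pacing_bitrate_bps_;
  int budget_bytes_;
  std::queue<int> queue_;
};
```

For example, a key frame of ten 1200-byte packets fed to a pacer configured at 800 kbps is drained over several process intervals rather than sent at once.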

Padding 
The bitrate during a videoconference can be very dynamic, for example being very low while the camera is pointing at a wall and increasing immediately when a person starts moving.    This behaviour can be a problem with networks that take some time to adapt to changes in traffic (especially mobile networks), and can be alleviated by keeping a constant bitrate, sending fake packets when there is not enough actual data to maintain that bitrate.   This is very easy to do by using RTP padding with random data, but can be done even better by adding redundancy or retransmissions so that those packets convey useful information in case of packet loss.

 int RTPSender::BuildPaddingPacket(uint8_t* packet, int header_length,
                                   int32_t bytes) {
   int padding_bytes_in_packet = kMaxPaddingLength;
   if (bytes < kMaxPaddingLength) {
     padding_bytes_in_packet = bytes;
   }
   // Set padding bit.
   packet[0] |= 0x20;
   int32_t* data = reinterpret_cast<int32_t*>(&(packet[header_length]));
   // Fill data buffer with random data.
   for (int j = 0; j < (padding_bytes_in_packet >> 2); ++j) {
     data[j] = rand();  // NOLINT
   }
   // Set number of padding bytes in the last byte of the packet.
   packet[header_length + padding_bytes_in_packet - 1] = padding_bytes_in_packet;
   return padding_bytes_in_packet;
 }
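On the receiving side the same RFC 3550 rules apply in reverse: if the padding bit (0x20 in the first octet) is set, the last octet of the packet holds the number of padding bytes to strip, including itself.  A sketch of that counterpart (hypothetical function name):

```cpp
#include <cstdint>

// Sketch of receiver-side padding removal per RFC 3550 (hypothetical
// helper, not from the WebRTC source). Returns the media payload
// length after stripping any padding.
int PayloadLength(const uint8_t* packet, int packet_length,
                  int header_length) {
  int padding = 0;
  if (packet[0] & 0x20) {
    // Padding bit set: last octet is the padding byte count.
    padding = packet[packet_length - 1];
  }
  return packet_length - header_length - padding;
}
```

A pure padding packet (no media at all) therefore yields a payload length of zero once the header and padding are accounted for.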

Suspend Below MinBitrate or AutoMuter 
Under some network conditions it is not possible to offer good video quality, and it is better to completely disable the video and keep only the audio stream.   This can be decided based on the bitrate estimation (usually calculated using packet loss and REMB packets) when it goes below or above a threshold (this threshold is the minimum bitrate for that codec and can be around 50 kbps in a default WebRTC endpoint).  This is partially implemented but not yet activated in WebRTC browsers.

 void MediaOptimization::CheckSuspendConditions() {  
   // Check conditions for SuspendBelowMinBitrate. |target_bit_rate_| is in bps.  
   if (suspension_enabled_) {
     if (!video_suspended_) {
       // Check if we just went below the threshold.      
       if (target_bit_rate_ < suspension_threshold_bps_) {
         video_suspended_ = true;
       }
     } else {
       // Video is already suspended. Check if we just went over the threshold
       // with a margin.
       if (target_bit_rate_ > suspension_threshold_bps_ + suspension_window_bps_) {
         video_suspended_ = false;
       }
     }
   }
 } 
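Note the hysteresis in the code above: video suspends when the estimate drops below the threshold, but only resumes once it exceeds the threshold plus a margin, which avoids toggling on and off while the bitrate hovers around the limit.  The same logic, extracted into a self-contained sketch with hypothetical names:

```cpp
// Sketch of the suspend/resume hysteresis above (hypothetical class,
// mirroring MediaOptimization::CheckSuspendConditions). Suspend below
// the threshold; resume only above threshold + window.
class SuspendGate {
 public:
  SuspendGate(int threshold_bps, int window_bps)
      : threshold_bps_(threshold_bps),
        window_bps_(window_bps),
        suspended_(false) {}

  // Feed the latest target bitrate estimate; returns whether video
  // should currently be suspended.
  bool Update(int target_bitrate_bps) {
    if (!suspended_ && target_bitrate_bps < threshold_bps_) {
      suspended_ = true;
    } else if (suspended_ &&
               target_bitrate_bps > threshold_bps_ + window_bps_) {
      suspended_ = false;
    }
    return suspended_;
  }

 private:
  int threshold_bps_;
  int window_bps_;
  bool suspended_;
};
```

For example, with a 50 kbps threshold and a 10 kbps window, an estimate oscillating between 52 and 58 kbps after a suspension keeps the video off until it clears 60 kbps.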
