
Showing posts from 2013

Pacing, Padding and AutoMuter

In recent months these terms have been appearing often in the WebRTC codebase and discussions. In this post I will try to give a very brief description of their meaning, behaviour and implementation status, and attach some comments or source code that can help illustrate those features.

Pacing or Smoothing

When transmitting video it is common to have peaks of traffic when sending a video frame, because a frame consists of many RTP packets (from 1 to even 50 in the case of a key frame) that are sent at the same instant in time. The solution to alleviate that problem is to add some spacing between those packets, or even between frames if needed. This is known as pacing or smoothing, and in the Google WebRTC implementation it is enabled by default in the latest versions.

// Set the pacing target bitrate and the bitrate up to which we are allowed to
// pad. We will send padding packets to increase the total bitrate until we
// reach |pad_up_to_bitrat
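To make the idea of pacing more concrete, here is a minimal sketch (not the actual PacedSender from the WebRTC codebase, and ignoring padding and the AutoMuter): the packets produced when a frame is encoded are queued and then released one by one, sleeping between sends so the outgoing rate stays close to a target bitrate. The class and names are illustrative only.

#include <chrono>
#include <cstdint>
#include <queue>
#include <thread>
#include <vector>

// Queues the packets of each encoded frame and drains them at a target
// bitrate instead of sending the whole frame in a single burst.
class SimplePacer {
 public:
  explicit SimplePacer(double target_bitrate_bps)
      : target_bitrate_bps_(target_bitrate_bps) {}

  // All packets of a frame arrive at the same instant from the encoder.
  void EnqueueFrame(const std::vector<std::vector<uint8_t>>& packets) {
    for (const auto& p : packets) queue_.push(p);
  }

  // Sends queued packets, sleeping after each one for the time that
  // packet "costs" at the configured bitrate.
  template <typename SendFn>
  void Process(SendFn send) {
    while (!queue_.empty()) {
      const std::vector<uint8_t>& packet = queue_.front();
      send(packet);
      const double seconds = packet.size() * 8.0 / target_bitrate_bps_;
      std::this_thread::sleep_for(std::chrono::duration<double>(seconds));
      queue_.pop();
    }
  }

 private:
  double target_bitrate_bps_;
  std::queue<std::vector<uint8_t>> queue_;
};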

OpenH264 Cisco code released

Finally Cisco has released the source code of the announced open source H.264 implementation: https://github.com/cisco/openh264. The code looks promising, although it is still a work in progress (not every platform is supported yet), and the features included in the encoder and decoder are impressive (simulcast, resolution and quality adaptation...). The binary module (the compiled version of that code) is not yet available, but apparently Mozilla may already be working on the integration in Firefox. This is a great video explaining how this code and binary module are going to work and the implications of that behavior: http://vimeo.com/79578794

This is obviously good news, although I still disagree with the inclusion of H.264 as a mandatory-to-implement codec for WebRTC. The web cannot be based on technologies that require royalties, even if Cisco is generous enough to pay them for some platforms for some amount of time. BTW, I still maintain my bet that there will not be an MTI codec in WebRTC.

IMS WebRTC gateways

In the last two years I've been attending conferences and presentations where traditional telco equipment providers try to sell what they typically call WebRTC gateways. I already mentioned this issue in another post last year ( WebRTC facts and lies ), but in this case I will try to explain in more detail why, from my perspective, the concept of a WebRTC gateway itself is wrong, why I see it just as a new attempt from vendors to sell as many boxes as possible, and why in my opinion there is a much better approach for a telco.

There are some recurrent misconceptions about WebRTC that are impacting the decisions made in this area. Let's try to clarify them before discussing the proposed approach: WebRTC is not about signaling at all, while IMS (and especially SIP) is mostly about signaling. WebRTC only defines how media is transmitted between endpoints. So it makes no sense to talk about WebRTC to SIP gateways or WebRTC to IMS gateways. You can use SIP with WebRTC

Redundant RTP coding

There are different mechanisms to make media transmission more robust to packet loss. Some of those techniques are negative acknowledgements and retransmissions, sending redundant information for forward error correction, and signal processing algorithms to reduce the impact of packet losses.

In the case of audio codecs, with the advent of Opus most packet loss issues disappear thanks to the robustness of the codec. You can test the quality yourself with 30% packet loss! [1] Where those techniques make more sense is with video codecs, because they are typically more fragile to packet loss; this is especially critical when sending key frames, whose loss can impact the communication for a long period of time (until the next keyframe).

This post is an overview of how redundant encoding works at the RTP level and what its implications are. The core idea is to modify the RTP packet format so that instead of including only the primary content payload (the encoded video content) it can also include e
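As a rough illustration of that idea, the sketch below packs one redundant block plus the primary payload into a single RED (RFC 2198) payload. The function name and values are illustrative; a real sender would also negotiate the RED payload type in SDP and place it in the RTP header.

#include <cstdint>
#include <vector>

// Builds a RED payload: a 4-byte header for the redundant block
// (F=1, payload type, 14-bit timestamp offset, 10-bit length), a 1-byte
// header for the primary block (F=0, payload type), then the block data.
std::vector<uint8_t> BuildRedPayload(uint8_t primary_pt,
                                     const std::vector<uint8_t>& primary,
                                     uint8_t redundant_pt,
                                     uint16_t ts_offset,  // 14 bits
                                     const std::vector<uint8_t>& redundant) {
  std::vector<uint8_t> out;
  const uint16_t length = static_cast<uint16_t>(redundant.size()) & 0x3FF;
  out.push_back(0x80 | (redundant_pt & 0x7F));                // F=1 | block PT
  out.push_back((ts_offset >> 6) & 0xFF);                     // offset, high 8 bits
  out.push_back(((ts_offset & 0x3F) << 2) | (length >> 8));   // offset low | length high
  out.push_back(length & 0xFF);                               // length, low 8 bits
  out.push_back(primary_pt & 0x7F);                           // F=0 | primary PT
  out.insert(out.end(), redundant.begin(), redundant.end());
  out.insert(out.end(), primary.begin(), primary.end());
  return out;
}

The receiver walks the block headers, uses the redundant copy only when the original packet was lost, and otherwise just decodes the primary payload.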

SRTP Introduction

TCP vs Push Notifications

Analyzing existing RTC mobile apps from a technical perspective, we can see a clear dichotomy in the solutions used today to maintain the communication channel between the apps and their respective servers, the channel used to signal messages, calls and any other communication event.

On one side we have the apps that maintain a permanent TCP connection between the client and the server; this is the case of Skype, TU Me or Google Talk. On the other side we have the apps that do not maintain a permanent TCP connection but instead use the push notifications provided by the operating system (APNS on iOS and GCM on Android) to alert the application of incoming events like messages or calls. This is the case of WhatsApp or Viber.

Just looking at the user experience, the second group of applications tends to consume less battery (because maintaining a TCP connection usually requires periodic keepalives) and tends to be more reliable (because solutions more tightly integrated with operat
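To illustrate the cost of the first approach, here is a minimal sketch (POSIX sockets, with an illustrative interval and keepalive message, not taken from any of the apps mentioned) of the keepalive loop an app with a permanent TCP connection typically runs:

#include <chrono>
#include <cstring>
#include <thread>
#include <sys/socket.h>
#include <unistd.h>

// Periodically sends a tiny message over an already-connected socket so
// NATs and the server keep the connection alive between real events.
void KeepaliveLoop(int connected_socket_fd) {
  const char kPing[] = "ping\n";
  while (true) {
    // A failed send means the connection dropped and must be re-established.
    if (send(connected_socket_fd, kPing, strlen(kPing), 0) < 0) {
      close(connected_socket_fd);
      return;
    }
    // Each periodic wakeup wakes the radio and costs battery on mobile,
    // which is the main argument for using push notifications instead.
    std::this_thread::sleep_for(std::chrono::seconds(30));
  }
}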