A new standard is making its way into web browsers and other clients around the world, and over the next few months it will likely change the way we communicate with each other. WebRTC (Web Real-Time Communication) is a set of protocols and technologies that have been proposed to allow modern web browsers (currently Chrome 23 has support) to embed live audio/video communication without a plugin like Flash.
Over the last few months we’ve been hard at work on a new variant of our iOS Video SDK, which we’re dubbing the OpenTok WebRTC for iOS SDK.
In the world of video, WebRTC is a really big deal. The quality increase we’ve seen with WebRTC video versus our current Flash SDK is phenomenal. For instance, video latency is typically less than 250ms under most network conditions, which is important for maintaining a flowing conversation and avoiding talking over other people on the call. Video quality is also noticeably better: the framerate and resolution are higher, and they adjust dynamically over time to take advantage of the bandwidth and device capabilities available between the clients.
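At a high level, that dynamic adjustment is a feedback loop: estimate the available bandwidth, then pick the best resolution and framerate that fit. Here’s a loose conceptual sketch in Python; the tiers and thresholds are illustrative only, not the values the SDK actually uses:

```python
# Conceptual sketch of bandwidth-driven quality adaptation.
# Tier values are illustrative, NOT the SDK's actual thresholds.

# (min_kbps, width, height, fps), best tier first
QUALITY_TIERS = [
    (1000, 1280, 720, 30),
    (500, 640, 480, 30),
    (250, 640, 480, 15),
    (150, 320, 240, 15),
    (0, 320, 240, 7),
]

def pick_quality(estimated_kbps):
    """Return the best (width, height, fps) the estimated bandwidth supports."""
    for min_kbps, width, height, fps in QUALITY_TIERS:
        if estimated_kbps >= min_kbps:
            return (width, height, fps)
    return QUALITY_TIERS[-1][1:]  # worst tier as a floor

# e.g. pick_quality(600) selects the 640x480 @ 30fps tier
```

In the real stack this decision is revisited continuously as network conditions change, rather than made once at call setup.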
We’ve seen some pretty exciting results under typical network conditions (broadband WiFi or 4G). The latency (or lack thereof) is instantly noticeable. The days of talking over people due to delays in video transmission are going away.
When we started looking into implementing peer-to-peer video in our iOS SDK, we were faced with some challenges. One of the big features that sets our platform apart from other video products today is the ability to interoperate between web browsers and mobile clients. We wanted to keep this functionality, but having it work with our current JS/Flash offering wasn’t a viable solution. We started working on a WebRTC implementation of our API back in July for Chrome, and this began to open some doors for us.
Google has been one of the driving forces behind the WebRTC standard. They’ve made several acquisitions in order to leverage some key technologies in the standard, and in doing so Google has also open sourced the WebRTC source code. We decided this might be the best place to start when adding peer-to-peer video to iOS. Not only was it already implemented in Chrome; it was also being kept up to date for us by Google and the open source community.
That being said, this wasn’t exactly easy. We faced many challenges getting WebRTC compiled, let alone running, on iOS devices. The first challenge was getting that very large codebase, which was not intended to run on mobile devices (at least not yet), to compile for the iOS platform. This required modifying the WebRTC codebase fairly extensively to enable code paths that were never written with iOS in mind. Parts of the codebase, including an iOS implementation of the capture pipeline (getting frames from the camera down to the VP8 encoder), had to be built from scratch. Once we got it running, we faced some problems keeping it compatible with the version in Chrome (it’s a rapidly changing codebase). For instance, the implementation of PeerConnection has changed (in a breaking fashion) at least a handful of times since we began this journey back in July.
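A capture pipeline like the one described above is essentially a chain of stages that each frame flows through: color conversion, scaling, then encoding. The toy Python sketch below shows only that shape; the stage names are assumptions for illustration, and the real iOS pipeline is built on the camera APIs and the VP8 encoder, not on anything shown here:

```python
# Conceptual sketch of a capture pipeline: each camera frame flows
# through a chain of stages before reaching the encoder. Stage names
# and the dict-based "frame" are illustrative only.

class CapturePipeline:
    def __init__(self):
        self.stages = []

    def add_stage(self, fn):
        self.stages.append(fn)
        return self  # allow chaining

    def process(self, frame):
        # Run the frame through every stage in order
        for stage in self.stages:
            frame = stage(frame)
        return frame

# Toy stages operating on a dict standing in for a raw frame
def to_i420(frame):
    return {**frame, "format": "I420"}      # color conversion

def scale_to_480p(frame):
    return {**frame, "width": 640, "height": 480}  # downscale

def encode_vp8(frame):
    return {**frame, "encoded": "vp8"}      # hand off to the encoder

pipeline = (CapturePipeline()
            .add_stage(to_i420)
            .add_stage(scale_to_480p)
            .add_stage(encode_vp8))
```

Feeding `pipeline.process({"format": "BGRA", "width": 1280, "height": 720})` a raw frame yields a converted, scaled, encoded result, which is the same hand-off structure the iOS implementation had to recreate natively.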
Once we got it stable and started testing, we also realized that this SDK was not going to be able to run on older devices. The key reason is that the current iteration of WebRTC uses a different video codec than our original iOS SDK (VP8 vs. H.264). This meant we couldn’t rely on hardware-accelerated video encoding for this new SDK out of the box: all video encoding and decoding now takes place in software on the CPU. In the browser this is unfortunately out of our control, but with our iOS SDK we have the ability to dig into the codebase and modify it to our liking.
Adding H.264 support between iOS devices is an effort we have already begun. Once it’s completed, we’ll be able to rely on hardware H.264 video encoding between iOS clients, which means CPU consumption on those devices will go down significantly. For now, our WebRTC stack is limited to newer, dual-core devices (such as the iPhone 4S, iPhone 5, iPad 2, the new iPad and 5th Gen iPod Touch). We’re very aware that this is a pain point for developers who want to add live video to their iOS apps, and we hope that support for older devices (iPhone 3GS and iPhone 4) will come in the future.
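Put together, the constraints above reduce to two simple decisions: gate the WebRTC stack to dual-core devices, and (once H.264 lands) prefer hardware H.264 when both peers are iOS devices that support it, falling back to VP8 for browser interop. This hypothetical sketch encodes those rules as described in this post; the function names and device list are illustration, not the SDK’s actual API:

```python
# Hypothetical gating/negotiation logic, for illustration only.
# Device names and rules come from the constraints described in
# the post, not from actual SDK code.

SUPPORTED_DEVICES = {
    "iPhone 4S", "iPhone 5", "iPad 2", "iPad (3rd gen)",
    "iPod touch (5th gen)",
}

def supports_webrtc(device_name):
    """Dual-core devices only: software VP8 is too heavy for older hardware."""
    return device_name in SUPPORTED_DEVICES

def choose_codec(local_hw_h264, remote_hw_h264):
    """Prefer hardware H.264 when both peers can use it; else VP8."""
    if local_hw_h264 and remote_hw_h264:
        return "H.264"
    return "VP8"
```

So an iPhone 5 talking to another iPhone 5 could eventually use hardware H.264, while the same iPhone 5 talking to Chrome would stay on VP8.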
We’re excited to release this new SDK (currently a new project on our GitHub page). It’s yet another option we’re offering to enable developers around the world to add live video to their iOS applications. This new framework raises the bar for acceptable video quality, and we’re looking forward to seeing even more use cases take advantage of what it has to offer. Dive right into our docs to get started. You can be up and running with your own version of FaceTime in less than 30 minutes.