Many of our partners eventually find themselves asking how to tell whether their users experience good quality on the OpenTok Platform. As time has taught us, this is a difficult question to answer. The most common complaints stem from underwhelming audio/video (A/V) quality between endpoints, and these complaints are nearly always rooted in the performance of the endpoint network. The correlation between network performance and A/V quality is widely accepted across the industry; in fact, we have built tools that expose network performance data as a proxy for subjective quality. But while objective data about a network is easy to collect, it is much harder to assign a number to the quality of experience a user subjectively perceives.
Today, it is easier than ever to get involved in real-time communications using WebRTC. For those considering investing in a mobile strategy, the technology ecosystem has never been riper for rapidly integrating WebRTC into your application, or even starting a project afresh. With build tools like CocoaPods to get started quickly with the OpenTok iOS SDK, and the new CallKit framework introduced in iOS 10, now is the best time to jump in and start building.
Lately, we have been thinking and talking about broadcast in multimedia. By now, you might have seen that TokBox is powering applications that go beyond the contemporary one-to-one and small group settings that are typically associated with the current generation of WebRTC apps, to a much larger scale of hundreds or even thousands of people watching and participating in the conversation. At a glance, this might not sound particularly groundbreaking; video has been distributed to large audiences for years. However, a closer look is necessary: with a shift in the underlying technology, TokBox adds the option of real-time communication to the existing large-audience reach of broadcast video, to enable a whole new class of applications.
Today, we are excited to announce that version 2.3.0 of the OpenTok iOS SDK is available to our developers.
Along with support for iOS 8 and Xcode 6, we want to share details about the new mobile features outlined below.
- Build voice-optimized experiences: Audio Levels API and a UI best-practices example. See more.
- Audio Driver API: implement custom Audio I/O in your app.
- Support for the armv7s architecture.
- Support for the iOS Simulator.
- Intelligent Quality Control features:
  - Video recovery from audio-only fallback.
  - Connection Quality API: a warning callback to notify you that audio-only fallback is imminent.
  - Audio-only fallback redesign.
Learn more about OpenTok iOS SDK 2.3.0 here.
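To give a flavor of the Audio Driver API, here is a minimal skeleton of a custom audio device. The names (`OTAudioDevice`, `OTAudioDeviceManager`, `OTAudioBus`) follow the 2.3 SDK surface, but treat this as a sketch under those assumptions and consult the SDK reference for the full protocol contract:

```objc
#import <OpenTok/OpenTok.h>

// Skeleton of a custom audio device. A real implementation would drive
// Core Audio (or any other audio I/O) and shuttle samples over the bus.
@interface MyAudioDevice : NSObject <OTAudioDevice>
@end

@implementation MyAudioDevice {
    id<OTAudioBus> _audioBus;
}

- (BOOL)setAudioBus:(id<OTAudioBus>)audioBus {
    // Keep a reference to the bus; captured microphone samples are
    // written to it, and samples for playback are read from it.
    _audioBus = audioBus;
    return YES;
}

// ... implement the remaining OTAudioDevice methods required by the
// protocol (capture/render formats, start/stop, delay estimates) ...

@end

// Register the custom device before connecting a session:
// [OTAudioDeviceManager setAudioDevice:[[MyAudioDevice alloc] init]];
```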
Last night TokBox released patches to the OpenTok iOS and Android SDKs to resolve a recently identified OpenSSL vulnerability that affects the majority of web service providers.
‘An attacker using a carefully crafted handshake can force the use of weak keying material in OpenSSL SSL/TLS clients and servers. This can be exploited by a Man-in-the-middle (MITM) attack where the attacker can decrypt and modify traffic from the attacked client and server.’
In the latest versions of the OpenTok SDKs for iOS and Android, everything is new. We saw an opportunity to learn from the lessons of the past two years, and seized it to overhaul the client architecture. The 2.2.0 release of the iOS and Android SDKs marks the second major revision of the OpenTok Mobile SDKs. This post highlights one of the many new features of the 2.2.0 SDKs that we are particularly excited about: the “Video Driver”. Although the feature exists with parity on both platforms, today we’ll focus on the iOS variant of the new API.
Nearly seven months ago, we publicly announced that the OpenTok API would extend its reach to native mobile application developers with the OpenTok iOS SDK. Since then, we have tightened the performance of the SDK runtime on iOS devices and spent a good deal of time learning how best to deliver video on mobile. While iOS commands a large share of the mobile app market, it is a natural next step to build similar SDKs for the other popular platforms outside the browser. It is a pleasure to announce that we are developing the OpenTok Android SDK, which will let native Android developers bring live video chat to their apps.
Several partners have been asking us about options for accessing media streams as they come and go from an iOS device. While more robust media access features are further off, I wanted to take some time to explore the options an iOS developer can play with today.
The UIKit view hierarchy integrates with a fairly simple animation and compositing API. Every instance of UIView is backed by an animation layer (CALayer), which can be accessed and manipulated without much complexity. A neat thing about CALayer is that you can render its contents at any time using the renderInContext: method. Most often, your render target is the window, which is managed by the UIKit view hierarchy, so none of this knowledge is particularly compelling. Unless, of course, you want to render the contents of the animation layer to a bitmap in memory to perform, say, facial recognition with the iOS 5 CIDetector.
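The technique above can be sketched in a few lines of Objective-C. The helper names here are illustrative (not from any SDK), but renderInContext:, UIGraphicsBeginImageContextWithOptions, and CIDetector are real UIKit/Core Image APIs:

```objc
#import <UIKit/UIKit.h>
#import <CoreImage/CoreImage.h>

// Render a view's backing CALayer into an in-memory bitmap.
UIImage *SnapshotView(UIView *view) {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0.0);
    // renderInContext: draws the layer tree into the current bitmap context.
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return snapshot;
}

// Scan the snapshot for faces with the iOS 5 CIDetector.
NSArray *DetectFaces(UIImage *image) {
    CIImage *ciImage = [CIImage imageWithCGImage:image.CGImage];
    NSDictionary *options =
        [NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh
                                    forKey:CIDetectorAccuracy];
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:options];
    return [detector featuresInImage:ciImage]; // array of CIFaceFeature
}
```

Because the snapshot is an ordinary UIImage, the same bitmap could just as easily be saved, uploaded, or fed into any other image-processing pipeline.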
We’ve been working on this project for a few months and are pretty excited to showcase how it’s made and what it can be made to do. I’d like to share some stories that happened along the way.