Yesterday, the talented folks at Double Robotics rocked the stage at LeWeb in Paris. David Cann, Co-founder and CEO, demoed their telepresence robot, Double. The sleek Double combines Segway-style movement with video presence delivered through an iPad.
Built on OpenTok's WebRTC-based iOS SDK, Double lets owners broadcast their video stream to the robot's iPad from any location. Not only that: the operator can view the robot's stream and control its movements from a smartphone.
Cann demoed Double's original inspiration: connecting remote workers with their home offices. Calling his CTO back in Sunnyvale, Cann took their Double for a three a.m. spin around the office from Paris. Customers are also using Double for remote tours of factories, museums, schools, and retail outlets.
With all the excitement around WebRTC and iOS interoperability, I'm sure many of you are eager to get started. If you don't have time to navigate the docs, you've come to the right place: in this article, I'm going to show you how to get started. If you didn't know already, WebRTC is a new web standard for interactive media streaming in the browser.
Browser to Browser
A new standard making its way into web browsers and other clients around the world over the next few months will likely change the way we communicate with each other. WebRTC (Web Real-Time Communication) is a set of protocols and technologies that allow modern web browsers (currently supported in Chrome 23) to embed live audio/video communication without a plugin like Flash.
Over the last few months we've been hard at work on a new variant of our iOS Video SDK, which we're dubbing the OpenTok WebRTC for iOS SDK.
In the world of video, WebRTC is a really big deal. The quality increase we've seen in WebRTC video versus our current Flash SDK is phenomenal. For instance, video latency is typically less than 250ms under most network conditions, which is important for maintaining a flowing conversation and avoiding talking over other people on the call. Video quality is also noticeably better: the framerate and resolution are higher, and they adjust dynamically over time to take advantage of the bandwidth and device capabilities available between the clients.
Back in March of this year, TokBox launched a new SDK for its video platform that brought the power of live, face-to-face conversations to the iOS platform (think FaceTime, but as an API). This SDK has been essential to our ecosystem: it has helped our partners create new iOS applications as well as add new value to existing applications through live video. We've seen some fantastic use cases take shape over the last few months; some are perhaps obvious, while others are pushing the limits of what live video can do.
Today we're taking real-time video on mobile by storm with the launch of our PhoneGap plugin. Don't want to code in a statically typed language (Objective-C)? We've got your back.
For a long time we've provided a video chat API for web apps, and we've seen some interesting applications: remote photo booths, online collaboration tools, consultation apps, you name it!
We're happy to announce that we've released a new iOS SDK binary with critical bug fixes, feature enhancements, and support for the iPhone 3GS.
To get started, head over to our GitHub repository.
To learn more about what new features are available, read on.
Several partners have been asking us about the options around getting access to media streams as they come and go from an iOS device. While more robust media access features are further off, I wanted to take some time to explore the options an iOS developer can play with today.
The UIKit view hierarchy integrates with a fairly simple animation and compositing API. Every instance of UIView is backed by an animation layer (CALayer), which can be accessed (and manipulated) without much complexity. A neat thing about CALayer is that you can render its contents at any time using the renderInContext: method. Most often, your render target is the window, which is managed by the UIKit view hierarchy, so none of this knowledge is particularly compelling. Unless, of course, you want to render the contents of the animation layer to a bitmap in memory to perform, say, facial recognition with the iOS 5 CIDetector.
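Here's a minimal sketch of that idea. Assume these methods live in a view controller and that the view you pass in is whatever UIView is rendering your video; the helper names snapshotOfView: and facesInView: are hypothetical, not SDK API:

```objc
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>
#import <CoreImage/CoreImage.h>

// Render a view's backing animation layer into a bitmap-backed UIImage.
- (UIImage *)snapshotOfView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0.0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

// Feed that snapshot to the iOS 5 CIDetector to look for faces.
- (NSArray *)facesInView:(UIView *)view
{
    CIImage *ciImage = [CIImage imageWithCGImage:[self snapshotOfView:view].CGImage];
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:@{CIDetectorAccuracy : CIDetectorAccuracyLow}];
    return [detector featuresInImage:ciImage]; // array of CIFaceFeature
}
```

Note that renderInContext: runs on the CPU, so you'll want to snapshot sparingly (say, a few times per second) rather than on every frame.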
In March we announced our iOS Developer Contest to celebrate our OpenTok iOS SDK.
Today we officially announce our winners! Out of all the super awesome submissions, three teams will each be receiving an iPad 3. Without further ado, here are the winners:
1) An iPad 3 goes to Romotive! Romo is a robot that you attach an iPhone or iPod touch to and control from your browser. You can also broadcast your video onto the iPhone and see what Romo sees in your browser. Welcome to the new age of telepresence! These guys are serious: you can order their robots today. Read their blog!
eHarmony Love Doctor, we know you have a foolproof matching formula. But there is something seriously missing from the magic equation that has worked pretty well since the dawn of time: the spark check. Photos, asynchronous messages and “winks” all have their place in the online dating scene, but our partner Date.fm is taking it one step further.
They realized that real-world dating cues are totally lost in translation when you bring the experience online. Face-to-face interaction is important: primarily to verify there is a spark, and secondarily to verify that their Prince Charming, who looks like Brad Pitt in his profile picture, doesn't actually look more like Willem Dafoe. No offense to anyone who finds Willem Dafoe attractive.
The OpenTok iOS SDK lets you use OpenTok video sessions in apps you build for iPad, iPhone, and iPod touch devices. This means you can use OpenTok video sessions that connect iOS users with each other and with web clients.
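To give a feel for the shape of the API, here's a rough sketch of connecting to a session and publishing the device camera. Exact method signatures vary between SDK versions, so treat this as an outline and check the docs for the version you're using; the API key, session ID, and token are placeholders you generate from your own OpenTok account:

```objc
#import <OpenTok/OpenTok.h>

@interface ChatViewController : UIViewController <OTSessionDelegate, OTPublisherDelegate>
@end

@implementation ChatViewController {
    OTSession *_session;
    OTPublisher *_publisher;
}

- (void)viewDidLoad
{
    [super viewDidLoad];
    // Placeholder credentials -- generate these from your OpenTok account.
    _session = [[OTSession alloc] initWithApiKey:@"YOUR_API_KEY"
                                       sessionId:@"YOUR_SESSION_ID"
                                        delegate:self];
    [_session connectWithToken:@"YOUR_TOKEN" error:nil];
}

// OTSessionDelegate: once connected, publish the device camera to the session.
- (void)sessionDidConnect:(OTSession *)session
{
    _publisher = [[OTPublisher alloc] initWithDelegate:self];
    [_session publish:_publisher error:nil];
    [self.view addSubview:_publisher.view];
}

// (Remaining OTSessionDelegate/OTPublisherDelegate methods omitted for brevity.)
@end
```

Because the session is WebRTC-based, the same session ID and token machinery works for web clients, which is how an iOS user ends up in the same video session as someone in a browser.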