Last Thursday, I had the pleasure of being a part of the Mobile + Web developer conference held at the Hilton Hotel in San Francisco. I spoke on a panel about where development is headed in a world where Web + Mobile are the two predominant platforms. There were four of us in total, and we had a great time talking about how each of us lives in, and views the future of development in, this two-platform world. The panel was composed of (beyond myself) John Hammink, a QA engineer from Mozilla, Jonathan Smiley, a partner at Zurb building their own HTML5 framework, and Ted Drake, a senior accessibility engineer from Intuit.
About six months ago, Microsoft released an alternative proposal to the W3C WebRTC 1.0 Working Draft, dubbed CU-RTC-Web. Like all W3C groups, the WebRTC Working Group enlists membership from a majority of the industry, including names like Nokia, Cisco, Google, and Mozilla. The most important question raised by the Microsoft proposal is how the Working Group would react to criticism of its draft proposal, and whether Microsoft would accept the published APIs of the Working Group, even if CU-RTC-Web is not adopted. So what exactly does this mean for the development community?
The Microsoft draft outlines a low-level API that gives developers more direct access to the underlying network and media delivery components. It exposes objects representing network sockets and gives the application explicit control over media transport. In contrast, the WebRTC API abstracts these details behind a text-based interface that passes encoded strings between the two participants in the call. With the WebRTC draft, developers are responsible for passing the strings between communicating browsers, but not for explicitly configuring the media transport for a video chat.
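To make the division of labor concrete, here is a minimal sketch (in Python, with all names hypothetical) of the one job the WebRTC draft does leave to the application: ferrying the browser's opaque, encoded strings (offers and answers) between the two participants over whatever transport the app chooses. The SDP contents are stand-ins; in a real browser those blobs come from the RTCPeerConnection API and the app never builds them by hand.

```python
import json
import queue


class SignalingChannel:
    """Stand-in for the app's own transport (WebSocket, long polling, etc.)."""

    def __init__(self):
        self._inbox = {"alice": queue.Queue(), "bob": queue.Queue()}

    def send(self, to, message):
        # The browser hands the app an encoded string; the app just relays it.
        self._inbox[to].put(message)

    def receive(self, who):
        return self._inbox[who].get()


def make_offer(sender):
    # In a browser this blob would come from createOffer(); it is opaque here.
    return json.dumps({"type": "offer", "from": sender, "sdp": "<opaque sdp>"})


def make_answer(sender):
    return json.dumps({"type": "answer", "from": sender, "sdp": "<opaque sdp>"})


channel = SignalingChannel()

# Alice's app forwards her browser's offer string to Bob.
channel.send("bob", make_offer("alice"))

# Bob's app hands the string to his browser, then relays the answer back.
offer = json.loads(channel.receive("bob"))
channel.send("alice", make_answer("bob"))
answer = json.loads(channel.receive("alice"))

print(offer["type"], "->", answer["type"])  # offer -> answer
```

Note that the app treats the strings as opaque payloads throughout; the actual configuration of sockets and media transport, which the Microsoft proposal would expose, stays inside the browser under the WebRTC draft.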
PennApps, the largest college hackathon in the world, took place this past weekend. It produced some of the best and most entertaining hacks I’ve seen at any hackathon: remote-controlled battle bots, automatic WiFi authentication for Facebook friends, seamlessly enlarging media from one mobile screen to many, an app that messages you if you forget to put required items in your backpack, and exploring neighborhoods from the comfort of your couch with augmented reality, just to name a few.
Looking back, I would say that this hackathon was a smashing success, and I’m sure the other sponsors would say the same. From my perspective as a developer evangelist, here’s why PennApps turned out to be a legendary hackathon and what we can learn from it:
Last weekend we had the pleasure of sponsoring the University Hacker Olympics. Unlike your typical hackathon, this one emphasized connecting university students with industry professionals.
Personally, I thought the event was innovative in the field of recruiting. In the traditional interview process, great candidates are sometimes dismissed because shyness or nervousness keeps them from performing. One-on-one interviews can be intimidating; we’ve all been there. And from the interviewer’s perspective, watching candidates solve problems provides little insight into how pleasant they would be to work with day to day.
Developing an iOS app is itself a huge undertaking: you want your product to be beautiful, interactive, and functional. That’s why Parse makes so much sense: it spares you from writing a backend server to power your app by giving you a data store and the most basic web services. Many of today’s web services are incredibly powerful and help developers do really amazing things, like OpenTok, but they assume you have a backend of your own. That’s where Parse Cloud Code comes in: it gives developers the best of a backend server via the path of least resistance.
Nearly 7 months ago, we publicly announced that the OpenTok API would extend its reach to native mobile application developers by publishing the OpenTok iOS SDK. In the time since, we have tightened the performance of the SDK runtime for iOS devices and spent a good deal of time learning about how best to deliver video to the mobile platform. While iOS commands a large portion of the mobile app market, it is intuitive that we should build similar SDKs for other popular platforms outside of the browser. It is a pleasure to announce that we are developing the OpenTok Android SDK, to allow native Android developers to bring live video chat to their apps.
A new standard making its way into web browsers and other clients around the world over the next few months will likely change the way we communicate with each other. WebRTC (Web Real-Time Communication) is a set of protocols and technologies proposed to allow modern web browsers (currently supported in Chrome 23) to embed live audio/video communications without a plugin like Flash.
Over the last few months we’ve been hard at work on a new variant of our iOS Video SDK, which we’re dubbing the OpenTok WebRTC for iOS SDK.
In the world of video, WebRTC is a really big deal. The quality increase we’ve seen in WebRTC video versus our current Flash SDK is phenomenal. For instance, video latency is typically less than 250ms under most network conditions, which is important for maintaining a flowing conversation and avoiding talking over other people on the call. Video quality is also noticeably better: the framerate and resolution are higher and adjust dynamically over time to take advantage of the bandwidth and device capabilities available between the clients.
The Staging environment of the OpenTok platform will no longer exist as of Wednesday, September 12. We are excited about bringing the quality, performance, and scale of our Production environment to all partners from their very first experience with the OpenTok platform.
Update: March 13, 2014 – Please note that this blog post references the archiving functionality in our OpenTok 1.0 platform. This feature is no longer being supported. Learn more about archiving using our OpenTok 2.0 platform.
A few weeks ago, Filepicker.io added new features that allow users to record video directly from their webcam into their cloud storage using OpenTok’s standalone recorder. What a cool integration! I can now leave video messages for myself every day.
That very weekend, I attended the Box hackathon and met the very cool guys from Filepicker. After speaking with them, I realized that OpenTok’s archiving capabilities integrate snugly with their API, especially with the recent release of our stitching API. And just like that, OpenTok Picker was born.
Back in March of this year, TokBox launched a new SDK for its video platform that took the power of live, face-to-face conversations and brought it to the iOS platform (think FaceTime, but as an API). This SDK has been essential to our ecosystem, helping our partners create new iOS applications and bring new value to existing applications by adding live video. We’ve seen some fantastic use cases take shape over the last few months, some perhaps obvious, and others pushing the limits of what video can do.