Several partners have been asking us about options for accessing media streams as they come and go on an iOS device. While more robust media-access features are further off, I wanted to take some time to explore the options an iOS developer can play with today.
The UIKit view hierarchy integrates with a fairly simple animation and compositing API. Every instance of UIView is backed by an animation layer (CALayer), which can be accessed (and manipulated) without much complexity. A neat thing about CALayer is that you can render its contents at any time using the renderInContext: method. Most often, your render target is the window, which is managed by the UIKit view hierarchy, so none of this knowledge is particularly compelling. Unless, of course, you want to render the contents of the animation layer to a bitmap in memory to perform, say, facial recognition with the iOS 5 CIDetector.
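To make that concrete, here's a minimal sketch of the idea. The helper function names are mine, not part of any SDK; only renderInContext: and CIDetector come from Apple's frameworks.

```objc
#import <UIKit/UIKit.h>
#import <CoreImage/CoreImage.h>

// Render a view's layer tree into an in-memory bitmap (UIImage).
UIImage *SnapshotOfView(UIView *view) {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0.0);
    // renderInContext: draws the layer and its sublayers into the
    // current bitmap context instead of the window.
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

// Run the iOS 5 face detector over that bitmap.
NSArray *FacesInImage(UIImage *image) {
    CIImage *ciImage = [CIImage imageWithCGImage:image.CGImage];
    CIDetector *detector =
        [CIDetector detectorOfType:CIDetectorTypeFace
                           context:nil
                           options:@{CIDetectorAccuracy : CIDetectorAccuracyLow}];
    return [detector featuresInImage:ciImage]; // array of CIFaceFeature
}
```

Treat this as an outline: in a real app you'd want to do the detection off the main thread and throttle how often you snapshot the layer.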
One year ago, I went and built something using the OpenTok API. I fell in love with TokBox, and I work here now. This weekend I'll be back as a sponsor, giving out a prize for the best use of OpenTok.
Out of sheer excitement, I have compiled a list of all the sponsor APIs that you can use to take your apps to the next level. Go through it, and see what ideas you can come up with to bag a few API prizes!
Today we officially announce our winners! Out of all the super awesome submissions, 3 teams will be receiving an iPad 3. Without further ado, here are the winners:
1) An iPad 3 goes to Romotive! Romo is a robot you dock your iPhone or iPod touch into, and you can control it from the browser! You can also broadcast your video to the iPhone and see what Romo sees in your browser. Welcome to the new age of telepresence! These guys are serious; you can order their robots today. Read their blog!
The year’s biggest hackathon kicks off in T-24 hours. Did you think we would miss it? Not a chance! We’re looking forward to sponsoring TechCrunch Disrupt’s Hackathon and Conference (this time in NYC) for the third time. Perhaps we’ll see an OpenTok-powered app take home the title. Third time’s the charm, right? RIGHT?
Since the last Disrupt Hackathon, more eyeballs have been on the video chat world than ever before. How so? For starters, Google+ Hangouts is gaining traction and pushing new features (hello, “On Air”), folks are anxiously awaiting the launch of AirTime (what is it already?!?), and we’ve launched the first ever iOS SDK for video chat. Not too shabby, video chat industry.
As a developer, there are many things you can do with an image: filters, face detection, object recognition, and more. Last week, Covify, an app that uses image recognition to scan music albums and add them to Spotify, won the Next Web Hackathon in Amsterdam.
Covify takes advantage of a lesser-known feature of OpenTok, the getImgData() API, which captures a base64-encoded snapshot of your webcam's current frame. Covify uses this call to grab the image from the webcam, sends it to their servers to identify the album, and then returns a link the user can click to add the album to Spotify.
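As a rough sketch of that flow: the helper functions and the server endpoint below are hypothetical; only getImgData() comes from OpenTok, and per the docs it returns a base64-encoded image string.

```javascript
// Wrap a base64 PNG string as a data URI, e.g. to preview the snapshot
// in an <img> tag before uploading it.
function toDataURI(base64Png) {
  return "data:image/png;base64," + base64Png;
}

// Build the JSON body a server-side recognizer could accept.
// The { image: ... } shape is an assumption, not an OpenTok contract.
function buildSnapshotPayload(base64Png) {
  return JSON.stringify({ image: base64Png });
}

// Usage with OpenTok (not run here):
//   var snapshot = publisher.getImgData();   // base64 image from the webcam
//   POST buildSnapshotPayload(snapshot) to your recognition endpoint,
//   then show the user the album link the server sends back.
```

The heavy lifting (album recognition) happens server-side; the client only has to snapshot, upload, and render the returned link.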