Screen Sharing Tutorial (iOS)

Overview

This tutorial demonstrates how to use the OpenTok iOS SDK to publish a screen-sharing stream, using the device screen as the video source.

Setting up your project

The code for this section is in the screen-sharing branch of the learning-opentok-ios repo. If you haven't already, clone the repo into a local directory from the command line:

git clone https://github.com/opentok/learning-opentok-ios.git

Then check out the branch:

git checkout screen-sharing

This branch shows you how to capture the screen (a UIView) using a custom video capturer. Open the project in Xcode to follow along.

Exploring the code

This sample uses the initCapture, releaseCapture, startCapture, stopCapture, and isCaptureStarted methods of the OTVideoCapture protocol (declared in OTVideoKit.h) to manage the application's capture functions. The ViewController class creates the session, sets up the publisher, and instantiates subscribers. The OTKBasicVideoCapturer class captures a screenshot, wraps it in a video frame, tags the frame with a timestamp, and hands it to its consumer; the publisher obtains frames from that consumer.
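For context, here is roughly how the custom capturer gets attached to the publisher in ViewController. This is a hedged sketch rather than the sample's exact doPublish code: it assumes a connected OTSession named session, and it uses the publisher's videoCapture and videoType properties from the OpenTok iOS SDK.

OTPublisher *publisher = [[OTPublisher alloc] initWithDelegate:self];

// Replace the default camera-based capturer with the screen capturer.
publisher.videoCapture = [[OTKBasicVideoCapturer alloc] init];

// Tag the stream as screen content so subscribers scale it to fit
// rather than cropping it to fill.
publisher.videoType = OTPublisherKitVideoTypeScreen;

// `session` is assumed to be the app's connected OTSession instance.
OTError *error = nil;
[session publish:publisher error:&error];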

The initCapture method initializes capture by setting the pixel format for the OTVideoFrame objects the capturer will produce. In this example, it is set to ARGB:

- (void)initCapture
{
    self.format = [[OTVideoFormat alloc] init];
    self.format.pixelFormat = OTPixelFormatARGB;
}

The releaseCapture method releases capture resources; in this sample, that simply means letting go of the video format object:

- (void)releaseCapture
{
    self.format = nil;
}

The startCapture method sets the captureStarted flag and dispatches the first call to the produceFrame method onto a background queue to begin screen capture:

- (int32_t)startCapture
{
    self.captureStarted = YES;
    dispatch_after(kTimerInterval,
                   dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0),
                   ^{
                       @autoreleasepool {
                           [self produceFrame];
                       }
                   });

    return 0;
}
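Neither kTimerInterval nor kFramesPerSecond appears in the snippets above; both are defined in the capturer's implementation file. The definitions below are an assumption, consistent with the 15-frames-per-second rate this sample uses. Because dispatch_after takes an absolute dispatch_time_t, the interval is expressed here as a macro recomputed relative to "now" at each call site:

// Hypothetical definitions; the actual ones live in OTKBasicVideoCapturer.m.
static const int32_t kFramesPerSecond = 15;

#define kTimerInterval \
    dispatch_time(DISPATCH_TIME_NOW, NSEC_PER_SEC / kFramesPerSecond)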

The produceFrame method builds each frame and delivers it to the consumer. The following walks through it step by step.

The frame for the captured image is created as an object of OTVideoFrame. Properties of OTVideoFrame define the planes, timestamp, orientation, and format of a frame:

OTVideoFrame *frame = [[OTVideoFrame alloc] initWithFormat:self.format];

A timestamp is created to tag the image. Every frame is tagged with a timestamp so that the publisher and subscribers can reconstruct the same timeline and reference frames in the same order:

static mach_timebase_info_data_t time_info;
if (time_info.denom == 0) {
    // One-time initialization of the Mach timebase, so the conversion
    // below yields nanoseconds rather than dividing by zero.
    (void)mach_timebase_info(&time_info);
}

uint64_t time_stamp = mach_absolute_time();
time_stamp *= time_info.numer;
time_stamp /= time_info.denom;

The screenshot method is called to obtain an image of the screen.

CGImageRef screenshot = [[self screenshot] CGImage];

The fillPixelBufferFromCGImage method converts the image data of a CGImage into a CVPixelBuffer.

[self fillPixelBufferFromCGImage:screenshot];
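The body of fillPixelBufferFromCGImage: is not reproduced in this walkthrough. The following is a minimal sketch of the conversion, assuming pixelBuffer is an existing CVPixelBufferRef instance variable created elsewhere with kCVPixelFormatType_32ARGB and matching the screenshot's dimensions; the sample's actual implementation in OTKBasicVideoCapturer.m may differ:

- (void)fillPixelBufferFromCGImage:(CGImageRef)image
{
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    // Wrap the pixel buffer's memory in a bitmap context laid out as
    // 32-bit ARGB, then draw the screenshot into it.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(
        CVPixelBufferGetBaseAddress(pixelBuffer),
        CGImageGetWidth(image),
        CGImageGetHeight(image),
        8,
        CVPixelBufferGetBytesPerRow(pixelBuffer),
        colorSpace,
        kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Big);

    CGContextDrawImage(context,
                       CGRectMake(0, 0,
                                  CGImageGetWidth(image),
                                  CGImageGetHeight(image)),
                       image);

    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}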

The frame is tagged with the timestamp, and its estimated capture rate (in frames per second) and estimated delay between captures are set:

CMTime time = CMTimeMake(time_stamp, 1000);
frame.timestamp = time;
frame.format.estimatedFramesPerSecond = kFramesPerSecond;
frame.format.estimatedCaptureDelay = 100;

The frame's width and height are read from the pixel buffer, and the number of bytes per row is set to the image width multiplied by 4. Note that the single-element bytesPerRow array and the 4-byte multiplier reflect a single-plane, 4-bytes-per-pixel ARGB image:

frame.format.imageWidth = CVPixelBufferGetWidth(pixelBuffer);
frame.format.imageHeight = CVPixelBufferGetHeight(pixelBuffer);
frame.format.bytesPerRow = [@[@(frame.format.imageWidth * 4)] mutableCopy];
frame.orientation = OTVideoOrientationUp;

CVPixelBufferLockBaseAddress(pixelBuffer, 0);
uint8_t *planes[1];

planes[0] = CVPixelBufferGetBaseAddress(pixelBuffer);
[frame setPlanesWithPointers:planes numPlanes:1];

The frame is passed to the consumer instance; the publisher accesses captured images through this consumer:

[self.consumer consumeFrame:frame];

The pixel buffer's base address is then unlocked. If capture is still in progress, produceFrame schedules itself to run again on a background-priority queue (separate from the queue used by the UI), 15 times per second:

CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
if (self.captureStarted) {
    dispatch_after(kTimerInterval,
                   dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0),
                   ^{
                       @autoreleasepool {
                           [self produceFrame];
                       }
                   });
}

The screenshot method captures the current contents of the screen and returns it as a UIImage. It is called by produceFrame on each capture cycle:

- (UIImage *)screenshot
{
    CGSize imageSize = [UIScreen mainScreen].bounds.size;

    UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
    UIWindow *window = [UIApplication sharedApplication].keyWindow;

    if ([window respondsToSelector:
         @selector(drawViewHierarchyInRect:afterScreenUpdates:)])
    {
        [window drawViewHierarchyInRect:window.bounds afterScreenUpdates:NO];
    }
    else {
        [window.layer renderInContext:UIGraphicsGetCurrentContext()];
    }

    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
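The stopCapture and isCaptureStarted methods mentioned at the start of this section are not shown above. Minimal sketches follow, matching the captureStarted flag that produceFrame checks; the captureSettings: method is included as an assumption based on the OTVideoCapture protocol, and the real implementations in OTKBasicVideoCapturer.m may differ:

- (int32_t)stopCapture
{
    // Clearing the flag stops produceFrame from rescheduling itself.
    self.captureStarted = NO;
    return 0;
}

- (BOOL)isCaptureStarted
{
    return self.captureStarted;
}

- (int32_t)captureSettings:(OTVideoFormat *)videoFormat
{
    // Report the same ARGB format that initCapture configured.
    videoFormat.pixelFormat = OTPixelFormatARGB;
    return 0;
}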

Congratulations! You've finished the Screen Sharing Tutorial for iOS.
You can continue to play with and adjust the code you've developed here, or check out the Next Steps below. For more information on screen sharing with OpenTok, see the OpenTok Screen sharing developer guide for iOS.