
Screen Sharing Step 1: Adding screen sharing

To see the code for this sample, switch to the screensharing branch of the learning-opentok-android repo:

git checkout screensharing

This page shows the difference between this branch and the basics.step-6 branch, which this branch builds from.

This branch shows you how to capture the screen (an Android View) using a custom video capturer.

Before studying this sample, see the basic-capturer-step.1 sample.

This sample code demonstrates how to use the OpenTok Android SDK to publish a screen-sharing video, using the device screen as the source for the stream's video.

The ChatActivity class uses a WebView object as the source for the screen-sharing video of the published stream.

In the initializePublisher() method of the ChatActivity class, after creating a Publisher object, the code calls the setCapturer(capturer) method of the Publisher object, passing in a ScreensharingCapturer object as a parameter:

mPublisher.setCapturer(new ScreensharingCapturer(mScreensharedView));

ScreensharingCapturer is a custom class that extends the BaseVideoCapturer class (defined in the OpenTok Android SDK). This class lets you define a custom video capturer to be used by an OpenTok publisher. The constructor of the ScreensharingCapturer class is passed an Android View object, which the capturer will use as the source for the video:

public ScreensharingCapturer(View view) {
    mContentView = view;
    mFrameProducerHandler = new Handler();
}

The constructor also creates a new Handler object to process the mFrameProducer Runnable object.

The BaseVideoCapturer.init() method initializes capture settings to be used by the custom video capturer. In this sample's custom implementation of BaseVideoCapturer (ScreensharingCapturer), the init() method sets the frame rate, dimensions, and pixel format of the OTVideoFrame objects to capture. In this example, the pixel format is set to ARGB:

public void init() {
    mCapturerHasStarted = false;
    mCapturerIsPaused = false;

    mCapturerSettings = new CaptureSettings();
    mCapturerSettings.fps = FPS;
    mCapturerSettings.width = mWidth;
    mCapturerSettings.height = mHeight;
    mCapturerSettings.format = BaseVideoCapturer.ARGB;
}
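As background on the ARGB format constant, the following standalone sketch (illustrative, not part of the sample) shows how each pixel is packed into a single int as 0xAARRGGBB, which is the layout Bitmap.getPixels() produces and the layout the ARGB capture setting describes:

```java
public class ArgbPackingSketch {
    public static void main(String[] args) {
        // Hypothetical channel values for one pixel
        int alpha = 0xFF, red = 0x12, green = 0x34, blue = 0x56;

        // Pack the four 8-bit channels into one int: 0xAARRGGBB
        int pixel = (alpha << 24) | (red << 16) | (green << 8) | blue;

        System.out.println(Integer.toHexString(pixel)); // prints ff123456
    }
}
```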

The startCapture() method sets the mCapturerHasStarted flag and posts the mFrameProducer Runnable to run after 1/15 of a second:

public int startCapture() {
    mCapturerHasStarted = true;
    mFrameProducerHandler.postDelayed(mFrameProducer, mFrameProducerIntervalMillis);
    return 0;
}
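The values of the FPS and mFrameProducerIntervalMillis constants are not shown in this excerpt. Assuming a 15 fps target (as the text's "1/15 second" suggests), the repost delay works out to roughly 66 ms, as this hypothetical sketch illustrates:

```java
public class FrameIntervalSketch {
    static final int FPS = 15;                       // assumed frame rate
    static final long INTERVAL_MILLIS = 1000 / FPS;  // delay between frames (integer division)

    public static void main(String[] args) {
        System.out.println(INTERVAL_MILLIS);         // prints 66
    }
}
```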

The mFrameProducer Runnable gets a Bitmap representation of the mContentView object (the WebView), writes its pixels to a buffer, and then calls the provideIntArrayFrame() method, passing in that buffer as a parameter:

private Runnable mFrameProducer = new Runnable() {
    public void run() {
        int width = mContentView.getWidth();
        int height = mContentView.getHeight();

        if (frameBuffer == null || mWidth != width || mHeight != height) {
            mWidth = width;
            mHeight = height;
            frameBuffer = new int[mWidth * mHeight];
        }

        Bitmap bmp = mContentView.getDrawingCache();
        if (bmp != null) {
            bmp.getPixels(frameBuffer, 0, width, 0, 0, width, height);
            provideIntArrayFrame(frameBuffer, ARGB, width, height, 0, false);
        }

        if (mCapturerHasStarted && !mCapturerIsPaused) {
            mFrameProducerHandler.postDelayed(mFrameProducer, mFrameProducerIntervalMillis);
        }
    }
};
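The dimension check at the top of run() means the frame buffer is only reallocated when the view's size changes, since each frame needs exactly width * height pixels. This plain-Java sketch (hypothetical names, no Android classes) isolates that logic:

```java
public class FrameBufferSketch {
    static int[] frameBuffer;
    static int mWidth, mHeight;

    // Reallocate the pixel buffer only when the dimensions change
    static void ensureBuffer(int width, int height) {
        if (frameBuffer == null || mWidth != width || mHeight != height) {
            mWidth = width;
            mHeight = height;
            frameBuffer = new int[mWidth * mHeight];
        }
    }

    public static void main(String[] args) {
        ensureBuffer(640, 480);
        int[] first = frameBuffer;
        ensureBuffer(640, 480);                   // same size: buffer is reused
        System.out.println(first == frameBuffer); // prints true
        ensureBuffer(320, 240);                   // new size: buffer reallocated
        System.out.println(frameBuffer.length);   // prints 76800
    }
}
```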

The provideIntArrayFrame() method, defined by the BaseVideoCapturer class (which the ScreensharingCapturer class extends), sends an integer array of data to the publisher, to be used for the next published video frame.

If the capturer has started and is not paused, the Runnable is posted again after another 1/15 of a second, so that the capturer continues to supply the publisher with new video frames to publish.
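This repost-until-stopped pattern can be sketched outside Android with a plain queue standing in for the Handler (hypothetical sketch, not from the sample): the task re-enqueues itself after producing a frame, but only while the started flag is set, so clearing the flag (as a stopCapture() implementation would) lets the loop drain naturally.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class RepostLoopSketch {
    static boolean started = true;
    static boolean paused = false;
    static int frames = 0;

    // Queue of pending runs, standing in for the Handler's message queue
    static final Deque<Runnable> handlerQueue = new ArrayDeque<>();

    static final Runnable frameProducer = new Runnable() {
        public void run() {
            frames++;                       // stand-in for provideIntArrayFrame()
            if (started && !paused) {
                handlerQueue.add(this);     // stand-in for postDelayed(...)
            }
        }
    };

    public static void main(String[] args) {
        handlerQueue.add(frameProducer);    // startCapture() posts the first run
        for (int i = 0; i < 3 && !handlerQueue.isEmpty(); i++) {
            handlerQueue.poll().run();      // three scheduled runs fire
        }
        started = false;                    // a stopCapture() would clear this flag
        while (!handlerQueue.isEmpty()) {
            handlerQueue.poll().run();      // one already-posted run still fires
        }
        System.out.println(frames);         // prints 4
    }
}
```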