
Custom Audio Driver Step 1: Adding a custom audio driver

  1. Custom Audio Driver Step 1: Implement custom audio driver
  2. Custom Audio Driver Step 2: Implement custom audio renderer

To see the code for this sample, switch to the audio-driver.step-1 branch of the learning-opentok-android repo:

git checkout audio-driver.step-1

This page shows the difference between this branch and the basics.step-6 branch, which this branch builds from.

This branch shows you how to implement a custom audio driver, using a simple audio capturer to supply the audio for the stream published by the app.

The OpenTok Android SDK lets you set up a custom audio driver for publishers and subscribers. You can use a custom audio driver to customize the audio sent to a publisher's stream. You can also customize the playback of subscribed streams' audio.

This sample application uses the custom audio driver to publish white noise (a random audio signal) to its audio stream. It also uses the custom audio driver to capture the audio from subscribed streams and save it to a file.

Setting up the audio device and the audio bus

To use a custom audio driver, you define an audio device and an audio bus for the app to use.

The BasicAudioDevice class defines a basic audio device for the app to use. It extends the BaseAudioDevice class, defined by the OpenTok Android SDK. To use a custom audio driver, call the AudioDeviceManager.setAudioDevice(device) method before you connect to a session. This sample sets the audio device to an instance of the BasicAudioDevice class:

AudioDeviceManager.setAudioDevice(new BasicAudioDevice(this));
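
In the sample this call is made before the session is created. The surrounding code looks roughly like the following sketch (the session setup shown here is assumed from the earlier basics steps, not copied from this branch):

// Set the custom audio device first; the audio device must be set
// before the Session is instantiated.
AudioDeviceManager.setAudioDevice(new BasicAudioDevice(this));

// Session setup as in the basics steps (API_KEY, SESSION_ID, and TOKEN assumed).
mSession = new Session.Builder(this, API_KEY, SESSION_ID).build();
mSession.setSessionListener(this);
mSession.connect(TOKEN);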

Use the AudioSettings class, defined in the OpenTok Android SDK, to define the audio format used by the custom audio driver. The BasicAudioDevice() constructor instantiates two AudioSettings instances: one for the custom audio capturer and one for the custom audio renderer. It sets the sample rate and number of channels for each:

public BasicAudioDevice(Context context) {
    mContext = context;

    mCaptureSettings = new AudioSettings(SAMPLING_RATE, NUM_CHANNELS_CAPTURING);
    mRendererSettings = new AudioSettings(SAMPLING_RATE, NUM_CHANNELS_RENDERING);

    mCapturerStarted = false;
    mRendererStarted = false;

    mAudioDriverPaused = false;

    mCapturerHandler = new Handler();
}

The constructor also initializes flags that track whether the device is capturing or rendering, and it creates a Handler instance that will run the mCapturer Runnable object.
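
For reference, the constants used above are defined as fields of BasicAudioDevice. The values below are a sketch for illustration; check the audio-driver.step-1 branch for the exact definitions:

// Assumed values, for illustration only.
private static final int SAMPLING_RATE = 44100;      // samples per second
private static final int NUM_CHANNELS_CAPTURING = 1; // mono capture
private static final int NUM_CHANNELS_RENDERING = 1; // mono rendering
private final long mCapturerIntervalMillis = 1000;   // repost the capturer every second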

The BasicAudioDevice getAudioBus() method gets the AudioBus instance that this audio device uses, defined by the BaseAudioDevice.AudioBus class. Use the AudioBus instance to send and receive audio samples to and from a session. The publisher accesses the AudioBus object to obtain the audio samples it transmits, and subscribers send audio samples (from subscribed streams) to the AudioBus object.
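
The two directions of the bus look roughly like this (a sketch: writeCaptureData() is used later on this page, readRenderData() is used by the custom renderer in step 2, and captureBuffer, renderBuffer, and numberOfSamples are placeholders):

AudioBus audioBus = getAudioBus();

// Capture direction: push samples for the publisher to send.
audioBus.writeCaptureData(captureBuffer, numberOfSamples);

// Render direction: pull mixed samples from subscribed streams for playback.
int samplesRead = audioBus.readRenderData(renderBuffer, numberOfSamples);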

Capturing audio to be used by a publisher

The BaseAudioDevice startCapturer() method is called when the audio device should start capturing audio to be published. The BasicAudioDevice implementation of this method posts the mCapturer Runnable to the handler's queue, to run after a 1-second delay (mCapturerIntervalMillis):

public boolean startCapturer() {
    mCapturerStarted = true;
    mCapturerHandler.postDelayed(mCapturer, mCapturerIntervalMillis);
    return true;
}
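
The matching stopCapturer() implementation is similarly small. A minimal version, assuming the same fields, clears the flag and removes any pending posts (a sketch; see the branch for the actual code):

public boolean stopCapturer() {
    mCapturerStarted = false;
    // Cancel any queued runs of the capturer so no further samples are produced.
    mCapturerHandler.removeCallbacks(mCapturer);
    return true;
}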

The mCapturer Runnable fills a buffer with random data (white noise). It then calls the writeCaptureData(data, numberOfSamples) method of the AudioBus object, which sends the samples to the audio bus. The publisher in the application transmits the samples sent to the audio bus as the audio in the published stream. Then, if capturing is still in progress (if the app is publishing), the mCapturer Runnable is posted to run again after another second:

private Runnable mCapturer = new Runnable() {
    @Override
    public void run() {
        mCapturerBuffer.rewind();

        Random rand = new Random();
        rand.nextBytes(mCapturerBuffer.array());

        getAudioBus().writeCaptureData(mCapturerBuffer, SAMPLING_RATE);

        if(mCapturerStarted && !mAudioDriverPaused) {
            mCapturerHandler.postDelayed(mCapturer, mCapturerIntervalMillis);
        }
    }
};
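
Note that the second argument to writeCaptureData() is the number of samples; here it equals SAMPLING_RATE because mCapturerBuffer holds exactly one second of audio. The buffer is allocated once, before capturing starts; sized for one second of 16-bit mono samples, the allocation looks roughly like this (a java.nio.ByteBuffer sketch; see the branch for the actual code):

// One second of mono 16-bit audio: SAMPLING_RATE samples * 2 bytes per sample.
mCapturerBuffer = ByteBuffer.allocateDirect(SAMPLING_RATE * 2);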

See the next step, audio-driver.step-2, for a simple implementation of a custom audio renderer.

Other notes on the audio driver API

The BaseAudioDevice class declares other methods that the BasicAudioDevice class implements. However, this sample does not do anything interesting in these methods, so they are not included in this discussion.
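
For completeness, those remaining overrides are essentially no-ops in this step. A sketch of what they look like (bodies assumed for illustration; the real ones are in the branch):

// No-op overrides: this step does no real device setup or teardown.
@Override public boolean initCapturer() { return true; }
@Override public boolean destroyCapturer() { return true; }
@Override public boolean initRenderer() { return true; }
@Override public boolean destroyRenderer() { return true; }
@Override public int getEstimatedCaptureDelay() { return 0; }
@Override public int getEstimatedRenderDelay() { return 0; }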
