
Custom Audio Driver Step 2: Adding a custom audio renderer

  1. Custom Audio Driver Step 1: Implement custom audio driver
  2. Custom Audio Driver Step 2: Implement custom audio renderer

To see the code for this sample, switch to the audio-driver.step-2 branch of the learning-opentok-android repo:

git checkout audio-driver.step-2

This page shows the difference between this branch and the audio-driver.step-1 branch, which this branch builds from.

This branch shows you how to implement a simple audio renderer for subscribed streams' audio.

The BasicAudioDevice() constructor sets up a file to which the app saves the incoming audio. This is done simply to illustrate one use of the custom audio driver's audio renderer. The app requires the following permissions, defined in the AndroidManifest.xml file:

<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
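Note that on Android 6.0 (API level 23) and higher, these storage permissions must also be requested at runtime, not just declared in the manifest. A minimal sketch of such a request, using the standard AndroidX helpers (the request-code constant is a hypothetical value you define yourself):

```java
// Sketch only: request the storage permissions at runtime on API 23+.
// PERMISSIONS_REQUEST_CODE is an arbitrary constant defined by your app.
private static final int PERMISSIONS_REQUEST_CODE = 101;

private void requestStoragePermissions(Activity activity) {
    if (ContextCompat.checkSelfPermission(activity,
            Manifest.permission.WRITE_EXTERNAL_STORAGE)
            != PackageManager.PERMISSION_GRANTED) {
        ActivityCompat.requestPermissions(activity,
                new String[] {
                        Manifest.permission.READ_EXTERNAL_STORAGE,
                        Manifest.permission.WRITE_EXTERNAL_STORAGE },
                PERMISSIONS_REQUEST_CODE);
    }
}
```

The result of the request is delivered to the activity's onRequestPermissionsResult() callback.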

The BaseAudioDevice.initRenderer() method is called when the app initializes the audio renderer. The BasicAudioDevice implementation of this method instantiates a new File object, to which the app will write audio data:

@Override
public boolean initRenderer() {
    mRendererBuffer = ByteBuffer.allocateDirect(SAMPLING_RATE * 2); // Each sample has 2 bytes
    mRendererFile = new File(
            Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DOCUMENTS),
            "output.raw");
    if (!mRendererFile.exists()) {
        try {
            mRendererFile.getParentFile().mkdirs();
            mRendererFile.createNewFile();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    return true;
}
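The allocateDirect(SAMPLING_RATE * 2) call sizes the buffer to hold one second of 16-bit (2-byte) mono samples. A quick plain-Java sketch of that arithmetic, using an assumed 44100 Hz rate purely for illustration (check the SAMPLING_RATE constant in BasicAudioDevice for the actual value):

```java
public class RendererBufferMath {
    // 16-bit mono PCM: one 2-byte sample per frame.
    static int oneSecondBufferBytes(int samplingRate) {
        return samplingRate * 2;
    }

    public static void main(String[] args) {
        // 44100 Hz is an assumed rate for illustration only.
        System.out.println(oneSecondBufferBytes(44100)); // 88200
    }
}
```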

The BaseAudioDevice.startRenderer() method is called when the audio device should start rendering (playing back) audio from subscribed streams. The BasicAudioDevice implementation of this method sets the mRendererStarted flag and posts the mRenderer Runnable to the mRendererHandler queue, to run after the interval defined by mRendererIntervalMillis:

@Override
public boolean startRenderer() {
    mRendererStarted = true;
    mRendererHandler.postDelayed(mRenderer, mRendererIntervalMillis);
    return true;
}
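The app must also implement the matching BaseAudioDevice.stopRenderer() method, which is not shown on this page. A minimal sketch of what such an implementation might look like, assuming the same mRendererStarted flag and mRendererHandler used above:

```java
// Sketch only: stop rescheduling and cancel any pending mRenderer posts.
@Override
public boolean stopRenderer() {
    mRendererStarted = false;
    mRendererHandler.removeCallbacks(mRenderer);
    return true;
}
```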

The mRenderer Runnable reads one second's worth of audio from the audio bus by calling the readRenderData(buffer, numberOfSamples) method of the AudioBus object. It then writes the audio data to the file (simply for illustration). Finally, if the audio device is still being used to render audio, it posts the mRenderer Runnable to run again after the interval defined by mRendererIntervalMillis:

private Handler mRendererHandler;
private Runnable mRenderer = new Runnable() {
    @Override
    public void run() {
        mRendererBuffer.clear();
        getAudioBus().readRenderData(mRendererBuffer, SAMPLING_RATE);
        // Copy the samples out of the direct buffer; a direct ByteBuffer
        // has no backing array, so array() cannot be used here.
        byte[] data = new byte[mRendererBuffer.capacity()];
        mRendererBuffer.rewind();
        mRendererBuffer.get(data);
        try {
            // Open in append mode so each interval's audio is added to the
            // file instead of overwriting the previous write.
            FileOutputStream stream = new FileOutputStream(mRendererFile, true);
            stream.write(data);
            stream.close();
        } catch (IOException e) {
            // FileNotFoundException is a subclass of IOException.
            e.printStackTrace();
        }

        if (mRendererStarted && !mAudioDriverPaused) {
            mRendererHandler.postDelayed(mRenderer, mRendererIntervalMillis);
        }
    }
};

This example is intentionally simple for instructional purposes: it simply writes the audio data to a file. In a more practical use of a custom audio driver, you could play back audio to a Bluetooth device or process audio before playing it back.
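Because output.raw contains headerless PCM, most media players cannot open it directly. One way to make the captured audio playable is to prepend a standard 44-byte WAV header when copying it to a .wav file. A self-contained sketch (the 44100 Hz rate, mono channel count, and 16-bit depth are assumptions that must match the sample's SAMPLING_RATE and buffer format):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class WavHeader {
    // Build a 44-byte WAV header for 16-bit mono PCM data of the given length.
    static byte[] header(int sampleRate, int pcmDataLength) {
        short channels = 1;
        short bitsPerSample = 16;
        int byteRate = sampleRate * channels * bitsPerSample / 8;
        ByteBuffer b = ByteBuffer.allocate(44).order(ByteOrder.LITTLE_ENDIAN);
        b.put("RIFF".getBytes());
        b.putInt(36 + pcmDataLength);                       // overall chunk size
        b.put("WAVE".getBytes());
        b.put("fmt ".getBytes());
        b.putInt(16);                                       // fmt sub-chunk size
        b.putShort((short) 1);                              // audio format: PCM
        b.putShort(channels);
        b.putInt(sampleRate);
        b.putInt(byteRate);
        b.putShort((short) (channels * bitsPerSample / 8)); // block align
        b.putShort(bitsPerSample);
        b.put("data".getBytes());
        b.putInt(pcmDataLength);
        return b.array();
    }

    public static void main(String[] args) {
        byte[] h = header(44100, 88200); // one second of assumed-rate audio
        System.out.println(h.length);
        System.out.println(new String(h, 0, 4));
    }
}
```

Writing this header followed by the raw bytes of output.raw yields a file that standard players can open.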
