You can use pre-built transformers in the Vonage Media Processor library or create your own custom audio or video transformer to apply to published video.
You can use the otc_publisher_set_video_transformers()
and
otc_publisher_set_audio_transformers()
functions to apply audio and video transformers to a published stream.
Important: Currently, only Apple silicon Macs are supported. See System requirements for more information.
The Vonage Video macOS SDK includes two ways to implement transformers:
Moderate — For video, you can apply the background blur and background replacement video transformers included in the Vonage Media Library. See Applying a video transformer from the Vonage Media Library. For audio, you can apply the noise suppression audio transformer included in the Vonage Media Library. See Applying an audio transformer from the Vonage Media Library.
Advanced — You can create your own custom video transformers and custom audio transformers.
Currently, Vonage Media Library transformers are only supported on Apple silicon Macs.
Transformers require adequate processor support. Even on supported systems, transformers may not be stable when background processes limit the available processing resources. These limitations apply to custom media transformers as well as to transformers from the Vonage Media Library.
macOS may throttle CPU performance to conserve energy (for example, to extend laptop battery life). This may result in suboptimal transformer performance and introduce unwanted audio or video artifacts. We recommend disabling low-power mode in such cases.
Many video transformations (such as background blur) use segmentation to separate the speaker from the background. For best results, use proper lighting and a plain background. Insufficient lighting or complex backgrounds may cause video artifacts (for example, the speaker or a hat the speaker is wearing may get blurred along with the background).
You should perform benchmark tests on as many supported devices as possible, regardless of the transformation.
Due to the significant size increase from integrating the Vonage Media Library into the SDK, the Media Transformers are available from OpenTok SDK v2.27.3 onward via the opt-in Vonage Media Library. This library must be explicitly added to the project.
The Vonage Media Library was initially embedded in the OpenTok SDK. If your OpenTok SDK version is older than 2.27.3, the library is already included, so you can move directly to Applying a video transformer from the Vonage Media Library and Applying an audio transformer from the Vonage Media Library.
The Vonage Media Library is available as the Pod "VonageClientSDKVideoMacOSTransformers", for use with CocoaPods.
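For example, a minimal Podfile entry might look like the following sketch (the target name YourMacApp is illustrative, not part of the Vonage documentation):
target 'YourMacApp' do
  pod 'VonageClientSDKVideoMacOSTransformers'
end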
If a call to otc_video_transformer_create() or otc_audio_transformer_create() is made without loading the library, the transformer returned will be null. You should check errno, a global variable that is set by system calls in the event of an error, to determine what went wrong.
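For example, here is a minimal sketch of checking for this failure (the transformer arguments are taken from the background blur example below):
#include <errno.h>
#include <stdio.h>

otc_video_transformer *transformer = otc_video_transformer_create(
  OTC_MEDIA_TRANSFORMER_TYPE_VONAGE,
  "BackgroundBlur",
  "{\"radius\":\"High\"}",
  NULL,
  NULL
);
if (transformer == NULL) {
  // errno indicates why the transformer could not be created
  // (for example, because the Vonage Media Library was not loaded).
  printf("Transformer creation failed. errno: %d\n", errno);
}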
Use the otc_video_transformer_create()
function to create a video transformer that uses a named transformer from the Vonage Media Library.
Two transformers are supported:
Background blur.
Set the type parameter to OTC_MEDIA_TRANSFORMER_TYPE_VONAGE (defined in the SDK). This indicates that you are using a transformer from the Vonage Media Library.
Set the name parameter to "BackgroundBlur".
Set the properties parameter to a JSON string defining properties for the transformer. For the background blur transformer, the format of the JSON is "{"radius":"None"}". Valid values for the radius property are "None", "High", "Low", and "Custom". If you set the radius property to "Custom", add a custom_radius property to the JSON string: "{"radius":"Custom","custom_radius":"value"}" (where custom_radius is a positive integer defining the blur radius). A sketch using the "Custom" radius follows the example code below.
Set the callback parameter to NULL. (This parameter is used for custom video transformers.)
Set the userData parameter to NULL. (This parameter is used for custom video transformers.)
otc_video_transformer *background_blur = otc_video_transformer_create(
OTC_MEDIA_TRANSFORMER_TYPE_VONAGE,
"BackgroundBlur",
"{\"radius\":\"High\"}",
NULL,
NULL
);
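For example, here is a minimal sketch creating a background blur transformer with the "Custom" radius (the radius value 20 and the variable name background_blur_custom are illustrative):
otc_video_transformer *background_blur_custom = otc_video_transformer_create(
OTC_MEDIA_TRANSFORMER_TYPE_VONAGE,
"BackgroundBlur",
"{\"radius\":\"Custom\",\"custom_radius\":\"20\"}",
NULL,
NULL
);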
Background replacement.
Set the type parameter to OTC_MEDIA_TRANSFORMER_TYPE_VONAGE (defined in the SDK). This indicates that you are using a transformer from the Vonage Media Library.
Set the name parameter to "BackgroundReplacement".
Set the properties parameter to a JSON string defining properties for the transformer. For the background replacement transformer, the format of the JSON is "{"image_file_path":"path/to/image"}", where image_file_path is the absolute file path of a local image to use as the virtual background. Supported image formats are PNG and JPEG.
Set the callback parameter to NULL. (This parameter is used for custom video transformers.)
Set the userData parameter to NULL. (This parameter is used for custom video transformers.)
otc_video_transformer *backgroundReplacement = otc_video_transformer_create(
OTC_MEDIA_TRANSFORMER_TYPE_VONAGE,
"BackgroundReplacement",
"{\"image_file_path\":\"path-to-image\"}",
NULL,
NULL
);
After you create the transformer, you can apply it to a publisher using the
otc_publisher_set_video_transformers()
function:
// Array of video transformers
otc_video_transformer *video_transformers[] = {
background_blur
};
otc_publisher_set_video_transformers(publisher, video_transformers, 1);
The last parameter of otc_publisher_set_video_transformers()
is the size of the transformers
array.
In this example we are applying one video transformer to the publisher. You can apply multiple transformers
by adding multiple otc_video_transformer objects to the transformers
array passed into
otc_publisher_set_video_transformers()
.
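For example, here is a sketch passing two transformers (assuming both the background_blur and backgroundReplacement transformers created above are still in scope; the combination simply illustrates passing more than one transformer in the array):
// Array of two video transformers
otc_video_transformer *video_transformers[] = {
background_blur,
backgroundReplacement
};
otc_publisher_set_video_transformers(publisher, video_transformers, 2);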
Note: This is a beta feature.
Use the otc_audio_transformer_create()
function to create an audio transformer that uses a named transformer from the Vonage Media Library.
One transformer is supported:
Noise Suppression.
Set the type parameter to OTC_MEDIA_TRANSFORMER_TYPE_VONAGE (defined in the SDK). This indicates that you are using a transformer from the Vonage Media Library.
Set the name parameter to "NoiseSuppression".
Set the properties parameter to a JSON string defining properties for the transformer. For the noise suppression transformer, there are currently no properties, so pass an empty JSON object ("{}").
Set the callback parameter to NULL. (This parameter is used for custom audio transformers.)
Set the userData parameter to NULL. (This parameter is used for custom audio transformers.)
otc_audio_transformer *noise_suppression = otc_audio_transformer_create(
OTC_MEDIA_TRANSFORMER_TYPE_VONAGE,
"NoiseSuppression",
"{}",
NULL,
NULL
);
After you create the transformer, you can apply it to a publisher using the
otc_publisher_set_audio_transformers()
function:
// Array of audio transformers
otc_audio_transformer *audio_transformers[] = {
noise_suppression
};
otc_publisher_set_audio_transformers(publisher, audio_transformers, 1);
The last parameter of otc_publisher_set_audio_transformers()
is the size of the transformers
array.
In this example we are applying one audio transformer to the publisher. You can apply multiple transformers
by adding multiple otc_audio_transformer objects to the transformers
array passed into
otc_publisher_set_audio_transformers()
.
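For example, here is a sketch passing two audio transformers (assuming you have also created the custom lowpass_filter transformer described in the custom audio transformer section below):
// Array of two audio transformers
otc_audio_transformer *audio_transformers[] = {
noise_suppression,
lowpass_filter
};
otc_publisher_set_audio_transformers(publisher, audio_transformers, 2);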
Use the otc_video_transformer_create()
function to create a video transformer.
Set the type parameter to OTC_MEDIA_TRANSFORMER_TYPE_CUSTOM (defined in the SDK). This indicates that you are creating a custom transformer.
Set the name parameter to a unique name for your transformer.
Set the properties parameter to NULL. (This parameter is used when using a transformer from the Vonage Media Library.)
Set the callback parameter to a callback function. This function is an instance of the video_transform_callback type, defined in the SDK. This function has two parameters: user_data (see the next parameter) and frame, an instance of type otc_video_frame (defined in the SDK) passed into the callback function when video frame data is available. Transform the video frame data in the callback function.
Set the userData parameter (optional) to user data to be passed into the callback function.
Here is a basic example:
void on_transform_black_white(void* user_data, struct otc_video_frame* frame)
{
// implement transformer on the otc_video_frame data
}
otc_video_transformer *black_white_border = otc_video_transformer_create(
OTC_MEDIA_TRANSFORMER_TYPE_CUSTOM,
"blacknwhite",
NULL,
on_transform_black_white,
NULL
);
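The userData parameter provides a way to pass application state into the callback. Here is a minimal sketch of that pattern (the frame_counter struct, the variable names, and the "frameCounter" transformer name are illustrative, not part of the SDK):
// Application state to be passed into the callback via userData.
struct frame_counter {
  int frames_seen;
};

void on_transform_count(void* user_data, struct otc_video_frame* frame)
{
  // Recover the application state passed as userData.
  struct frame_counter *counter = (struct frame_counter *)user_data;
  counter->frames_seen++;
  // Transform the otc_video_frame data here.
}

static struct frame_counter counter = { 0 };

otc_video_transformer *counting_transformer = otc_video_transformer_create(
  OTC_MEDIA_TRANSFORMER_TYPE_CUSTOM,
  "frameCounter",
  NULL,
  on_transform_count,
  &counter
);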
After you create the transformer, you can apply it to a publisher using the
otc_publisher_set_video_transformers()
function:
// Array of video transformers
otc_video_transformer *video_transformers[] = {
black_white_border
};
otc_publisher_set_video_transformers(publisher, video_transformers, 1);
Use the otc_audio_transformer_create()
function to create an audio transformer.
Set the type parameter to OTC_MEDIA_TRANSFORMER_TYPE_CUSTOM (defined in the SDK). This indicates that you are creating a custom transformer.
Set the name parameter to a unique name for your transformer.
Set the properties parameter to NULL. (This parameter is used when using a transformer from the Vonage Media Library.)
Set the callback parameter to a callback function. This function is an instance of the audio_transform_callback type, defined in the SDK. This function has two parameters: user_data (see the next parameter) and frame, an instance of type otc_audio_data (defined in the SDK) passed into the callback function when audio data is available. Transform the audio data in the callback function.
Set the userData parameter (optional) to user data to be passed into the callback function.
Here is a basic example:
void on_transform_audio(void* user_data, struct otc_audio_data* frame)
{
// implement transformer on the otc_audio_data audio data
}
otc_audio_transformer *lowpass_filter = otc_audio_transformer_create(
OTC_MEDIA_TRANSFORMER_TYPE_CUSTOM,
"lowpassFilter",
NULL,
on_transform_audio,
NULL
);
After you create the transformer, you can apply it to a publisher using the
otc_publisher_set_audio_transformers()
function:
// Array of audio transformers
otc_audio_transformer *audio_transformers[] = {
lowpass_filter
};
otc_publisher_set_audio_transformers(publisher, audio_transformers, 1);
The last parameter of otc_publisher_set_audio_transformers()
is the size of the transformers
array.
In this example we are applying one audio transformer to the publisher. You can apply multiple transformers
by adding multiple otc_audio_transformer
objects to the transformers
array passed into
otc_publisher_set_audio_transformers()
.
To clear video transformers for a publisher, pass an empty array into the
otc_publisher_set_video_transformers()
function.
otc_video_transformer *empty_array[] = {};
otc_publisher_set_video_transformers(publisher, empty_array, 0);
Use the otc_video_transformer_delete() function to delete an otc_video_transformer instance:
otc_video_transformer_delete(black_white_border);
To clear audio transformers for a publisher, pass an empty array into the
otc_publisher_set_audio_transformers()
function.
otc_audio_transformer *empty_array[] = {};
otc_publisher_set_audio_transformers(publisher, empty_array, 0);
Use the otc_audio_transformer_delete() function to delete an otc_audio_transformer instance:
otc_audio_transformer_delete(lowpass_filter);
See this sample at the opentok-macos-sdk-samples repo on GitHub.