Release notes
This page provides the release notes for Interactive Live Streaming 4.x.
If your target platform is Android 12 or higher, add the android.permission.BLUETOOTH_CONNECT permission to the AndroidManifest.xml file of your Android project to enable the Bluetooth function of the Android system.
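For example, the corresponding manifest entry is:

```xml
<!-- Required on Android 12 (API level 31) or higher to use Bluetooth -->
<uses-permission android:name="android.permission.BLUETOOTH_CONNECT" />
```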
v4.0.1
v4.0.1 was released on September 29, 2022.
Compatibility changes
This release deletes the sourceType parameter from enableDualStreamMode [3/3] and enableDualStreamModeEx, and removes the enableDualStreamMode [2/3] method. Because the SDK now supports enabling dual-stream mode for video sources captured either by the SDK or by custom capture, you no longer need to specify the video source type.
New features
1. In-ear monitoring
This release adds the getEarMonitoringAudioParams callback to set the audio data format of in-ear monitoring. You can use your own audio effect processing module to pre-process the audio frame data of in-ear monitoring and implement custom audio effects. After calling registerAudioFrameObserver to register the audio observer, set the audio data format in the return value of the getEarMonitoringAudioParams callback. The SDK calculates the sampling interval based on the return value of this callback and triggers the onEarMonitoringAudioFrame callback at that interval.
2. Audio capture device test
This release adds support for testing local audio capture devices before joining a channel. You can call startRecordingDeviceTest to start the audio capture device test. After the test is complete, call the stopRecordingDeviceTest method to stop it.
3. Local network connection types
To make it easier for users to know the connection type of the local network at any stage, this release adds the getNetworkType
method. You can use this method to get the type of network connection in use, including UNKNOWN, DISCONNECTED, LAN, WIFI, 2G, 3G, 4G, 5G. When the local network connection type changes, the SDK triggers the onNetworkTypeChanged
callback to report the current network connection type.
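A minimal sketch of querying the connection type, assuming an initialized RtcEngine instance (import paths follow the 4.x Android SDK and may differ in your project):

```java
import io.agora.rtc2.RtcEngine;

// Sketch: query the current local network connection type on demand.
// The returned int maps to the types listed above (UNKNOWN, DISCONNECTED,
// LAN, WIFI, 2G, 3G, 4G, 5G); take the exact constant names for these
// values from the API reference.
void logNetworkType(RtcEngine rtcEngine) {
    int networkType = rtcEngine.getNetworkType();
    android.util.Log.d("NetworkType", "Current network type: " + networkType);
}
```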
4. Audio stream filter
This release introduces filtering audio streams based on volume. Once this function is enabled, the Agora server ranks all audio streams by volume and by default transports the three audio streams with the highest volumes to the receivers. You can contact support@agora.io to adjust the number of transported audio streams according to your scenarios.
Meanwhile, Agora allows publishers to choose whether or not the audio streams they publish are filtered based on volume. Streams that are not filtered bypass this mechanism and are transported directly to the receivers. In scenarios with many publishers, enabling this function helps reduce the bandwidth and device system pressure for the receivers.
To enable this function, contact support@agora.io.
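On the publisher side, the opt-out is exposed through the isAudioFilterable field of ChannelMediaOptions listed under API changes below. A hedged sketch, assuming that setting the field to false means the published audio stream is not filtered (confirm the default value and semantics against the API reference):

```java
import io.agora.rtc2.ChannelMediaOptions;
import io.agora.rtc2.RtcEngine;

// Sketch only: a publisher opts its audio stream out of volume-based filtering.
void joinWithUnfilterableAudio(RtcEngine rtcEngine, String token, String channelName) {
    ChannelMediaOptions options = new ChannelMediaOptions();
    options.publishMicrophoneTrack = true;
    options.isAudioFilterable = false; // assumed: this stream bypasses the volume filter
    rtcEngine.joinChannel(token, channelName, 0, options);
}
```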
5. Dual-stream mode
This release optimizes the dual-stream mode: you can call enableDualStreamMode and enableDualStreamModeEx both before and after joining a channel.
The implementation of subscribing to the low-quality video stream is expanded. The SDK enables the low-quality video stream auto mode on the sender side by default (the SDK does not send low-quality video streams); you can follow these steps to enable sending low-quality video streams:
- The host at the receiving end calls setRemoteVideoStreamType or setRemoteDefaultVideoStreamType to initiate a low-quality video stream request.
- After receiving the request, the sender automatically switches to sending the low-quality video stream.
If you want to modify this default behavior, you can call setDualStreamMode [1/2] or setDualStreamMode [2/2] and set the mode parameter to DISABLE_SIMULCAST_STREAM (never send low-quality video streams) or ENABLE_SIMULCAST_STREAM (always send low-quality video streams).
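The sketch below shows both sides of this flow under stated assumptions: an initialized engine, a known remote uid, and the constant and enum names as listed in this release (the exact Java types may differ; check the API reference):

```java
import io.agora.rtc2.Constants;
import io.agora.rtc2.RtcEngine;

// Receiver side: request the low-quality stream of a remote user. With the
// default auto mode on the sender, this request makes the sender start
// publishing the low-quality stream.
void requestLowStream(RtcEngine rtcEngine, int remoteUid) {
    rtcEngine.setRemoteVideoStreamType(remoteUid, Constants.VIDEO_STREAM_LOW);
}

// Sender side: override the default behavior and always send the low-quality
// stream. The enum below follows the SIMULCAST_STREAM_MODE values added in
// this release; its exact Java name is an assumption.
void alwaysSendLowStream(RtcEngine rtcEngine) {
    rtcEngine.setDualStreamMode(Constants.SimulcastStreamMode.ENABLE_SIMULCAST_STREAM);
}
```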
6. Spatial audio effect
This release adds the following features applicable to spatial audio effect scenarios, which can effectively enhance the user's sense of presence in virtual interactive scenarios.
- Sound insulation area: You can set a sound insulation area and a sound attenuation parameter by calling setZones. When the sound source (which can be a user or the media player) and the listener are on opposite sides of a sound insulation area, the listener experiences an attenuation effect similar to that of sound encountering a building partition in the real environment. You can also set the sound attenuation parameter for the media player and the user, respectively, by calling setPlayerAttenuation and setRemoteAudioAttenuation, and specify whether to use that setting to force an override of the sound attenuation parameter in setZones.
- Doppler sound: You can enable Doppler sound by setting the enable_doppler parameter in SpatialAudioParams. The receiver experiences noticeable tonal changes when there is a high-speed relative displacement between the sound source and the receiver (such as in a racing game scenario).
- Headphone equalizer: You can use a preset headphone equalization effect by calling the setHeadphoneEQPreset method to improve the sound heard through headphones.
Improvements
1. Video information change callback
This release optimizes the trigger logic of onVideoSizeChanged, which can now also be triggered to report the local video size change when startPreview is called separately.
Issues fixed
This release fixed the following issues.
- When calling setVideoEncoderConfigurationEx in the channel to increase the video resolution, the call occasionally failed.
- In online meeting scenarios, the local user and the remote user might not have been able to hear each other after the local user was interrupted by a call.
- After calling setCloudProxy to set the cloud proxy, calling joinChannelEx to join multiple channels failed.
- When using the Agora media player to play videos: after playing and pausing the video and then calling the seek method to specify a new playback position, the video image sometimes remained unchanged; calling the resume method to resume playback sometimes played the video at a faster speed than the original.
API changes
Added
- getEarMonitoringAudioParams
- startRecordingDeviceTest
- stopRecordingDeviceTest
- getNetworkType
- isAudioFilterable in ChannelMediaOptions
- setDualStreamMode [1/2]
- setDualStreamMode [2/2]
- setDualStreamModeEx
- SIMULCAST_STREAM_MODE
- setZones
- setPlayerAttenuation
- setRemoteAudioAttenuation
- muteRemoteAudioStream
- SpatialAudioParams
- setHeadphoneEQPreset
- HEADPHONE_EQUALIZER_PRESET
Modified
- enableDualStreamMode [1/3]
- enableDualStreamMode [3/3]
- enableDualStreamModeEx
Deprecated
- startEchoTest [2/3]
Deleted
- enableDualStreamMode [2/3]
v4.0.0
v4.0.0 was released on September 15, 2022.
Compatibility changes
1. Integration change
This release has optimized the implementation of some features, resulting in incompatibility with v3.7.0. The following are the main features with compatibility changes:
- Multiple channels
- Media stream publishing control
- Custom video capture and rendering (Media IO)
- Warning codes
After upgrading the SDK, you need to update the code in your app according to your business scenarios. For details, see Migrate from v3.7.0 to v4.0.0.
2. Callback exception handling
To facilitate troubleshooting, as of this release, the SDK no longer catches exceptions thrown by your own code when triggering callbacks in the IRtcEngineEventHandler class. You need to catch and handle these exceptions yourself; otherwise, the app can crash.
New features
1. Multiple media tracks
This release supports using one RtcEngine instance to capture multiple audio and video sources at the same time and publish them to remote users by setting RtcEngineEx and ChannelMediaOptions.
After calling joinChannel to join the first channel, call joinChannelEx multiple times to join multiple channels, and publish the specified stream to different channels through different user IDs (localUid) and ChannelMediaOptions settings.
In addition, this release adds the createCustomVideoTrack method to implement custom video capture. You can refer to the following steps (and the sketch after them) to publish multiple custom captured video streams in the channel:
1. Create a custom video track: Call the createCustomVideoTrack method to create a video track and get the video track ID.
2. Set the custom video track to be published in the channel: In each channel's ChannelMediaOptions, set the customVideoTrackId parameter to the ID of the video track you want to publish, and set publishCustomVideoTrack to true.
3. Push an external video source: Call pushVideoFrame and specify videoTrackId as the ID of the custom video track set in step 2 in order to publish the corresponding custom video source in multiple channels.
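A condensed sketch of these steps, assuming an initialized RtcEngineEx instance and a valid token; the field names come from ChannelMediaOptions and RtcConnection, while the channel name, uid, and the joinChannelEx parameter order are assumptions for illustration:

```java
import io.agora.rtc2.ChannelMediaOptions;
import io.agora.rtc2.Constants;
import io.agora.rtc2.IRtcEngineEventHandler;
import io.agora.rtc2.RtcConnection;
import io.agora.rtc2.RtcEngineEx;

// Sketch: publish a custom-captured video track to an additional channel.
void publishCustomTrack(RtcEngineEx rtcEngineEx, String token) {
    // Step 1: create a custom video track and obtain its ID.
    int videoTrackId = rtcEngineEx.createCustomVideoTrack();

    // Step 2: publish that track in the target channel.
    ChannelMediaOptions options = new ChannelMediaOptions();
    options.clientRoleType = Constants.CLIENT_ROLE_BROADCASTER;
    options.publishCustomVideoTrack = true;
    options.customVideoTrackId = videoTrackId;

    RtcConnection connection = new RtcConnection();
    connection.channelId = "demo_channel_2"; // hypothetical channel name
    connection.localUid = 10001;             // hypothetical local uid

    // Parameter order is an assumption; see the joinChannelEx reference.
    rtcEngineEx.joinChannelEx(token, connection, options, new IRtcEngineEventHandler() {});

    // Step 3: feed frames to this track via pushVideoFrame, passing videoTrackId
    // so that the frames are published on the track set up above.
}
```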
You can also experience the following features with the multi-channel capability:
- Publish multiple sets of audio and video streams to the remote users through different user IDs (uid).
- Mix multiple audio streams and publish them to the remote users through a user ID (uid).
- Combine multiple video streams and publish them to the remote users through a user ID (uid).
2. Ultra HD resolution (Beta)
In order to improve the interactive video experience, the SDK optimizes the whole process of video capture, encoding, decoding and rendering, and now supports 4K resolution. The improved FEC (Forward Error Correction) algorithm enables adaptive switches according to the frame rate and number of video frame packets, which further reduces the video stuttering rate in 4K scenes.
Additionally, you can set the encoding resolution to 4K (3840 × 2160) and the frame rate to 60 fps when calling setVideoEncoderConfiguration
. The SDK supports automatic fallback to the appropriate resolution and frame rate if your device does not support 4K.
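For example, a sketch using the VideoEncoderConfiguration types of the Android SDK (enum and constant names may differ slightly by version):

```java
import io.agora.rtc2.RtcEngine;
import io.agora.rtc2.video.VideoEncoderConfiguration;

// Sketch: request 4K at 60 fps; the SDK falls back automatically if the
// device cannot sustain this resolution or frame rate.
void configure4K(RtcEngine rtcEngine) {
    VideoEncoderConfiguration config = new VideoEncoderConfiguration(
            new VideoEncoderConfiguration.VideoDimensions(3840, 2160),
            VideoEncoderConfiguration.FRAME_RATE.FRAME_RATE_FPS_60,
            VideoEncoderConfiguration.STANDARD_BITRATE,
            VideoEncoderConfiguration.ORIENTATION_MODE.ORIENTATION_MODE_ADAPTIVE);
    rtcEngine.setVideoEncoderConfiguration(config);
}
```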
This feature has certain requirements with regard to device performance and network bandwidth, and the supported upstream and downstream frame rates vary on different platforms. To enable this feature, contact support@agora.io.
3. Agora media player
To make it easier for users to integrate the Agora SDK and reduce the SDK's package size, this release introduces the Agora media player. After calling the createMediaPlayer
method to create a media player object, you can then call the methods in the IMediaPlayer
class to experience a series of functions, such as playing local and online media files, preloading a media file, changing the CDN route for playing according to your network conditions, or sharing the audio and video streams being played with remote users.
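A minimal sketch of the basic playback flow, assuming an initialized RtcEngine; the URL is a placeholder, the import path for IMediaPlayer is an assumption, and a real app would register a player observer and wait for the open-completed state before calling play:

```java
import io.agora.mediaplayer.IMediaPlayer;
import io.agora.rtc2.RtcEngine;

// Sketch: create a media player and open an online media file.
void playMedia(RtcEngine rtcEngine) {
    IMediaPlayer mediaPlayer = rtcEngine.createMediaPlayer();
    // Open a local path or URL; 0 means start playback from the beginning.
    mediaPlayer.open("https://example.com/sample.mp4", 0);
    // In production, call play() from the observer callback that reports the
    // "open completed" player state instead of calling it immediately.
    mediaPlayer.play();
}
```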
4. Brand-new AI noise reduction
The SDK supports a new version of AI noise reduction (in comparison to the basic AI noise reduction in v3.7.0). The new AI noise reduction has better vocal fidelity, cleaner noise suppression, and adds a dereverberation option. To enable this feature, contact sales-us@agora.io.
5. Ultra-high audio quality
To make the audio clearer and restore more details, this release adds the ULTRA_HIGH_QUALITY_VOICE
enumeration. In scenarios that mainly feature the human voice, such as chat or singing, you can call setVoiceBeautifierPreset
and use this enumeration to experience ultra-high audio quality.
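For example, a minimal sketch assuming an initialized RtcEngine (the constant's exact location in the Java API is an assumption):

```java
import io.agora.rtc2.Constants;
import io.agora.rtc2.RtcEngine;

// Sketch: apply the ultra-high audio quality preset in a voice-centric scenario.
void enableUltraHighQualityVoice(RtcEngine rtcEngine) {
    rtcEngine.setVoiceBeautifierPreset(Constants.ULTRA_HIGH_QUALITY_VOICE);
}
```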
6. Spatial audio
This feature is in experimental status. To enable this feature, contact sales-us@agora.io. Contact technical support if needed.
You can set the spatial audio for the remote user as follows:
- Local Cartesian Coordinate System Calculation: This solution uses the ILocalSpatialAudioEngine class to implement spatial audio by calculating the spatial coordinates of the remote user. You need to call updateSelfPosition and updateRemotePosition to update the spatial coordinates of the local and remote users, respectively, so that the local user can hear the spatial audio effect of the remote user.
You can also set the spatial audio for the media player as follows:
- Local Cartesian Coordinate System Calculation: This solution uses the ILocalSpatialAudioEngine class to implement spatial audio. You need to call updateSelfPosition and updatePlayerPositionInfo to update the spatial coordinates of the local user and the media player, respectively, so that the local user can hear the spatial audio effect of the media player (see the sketch after this list).
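A hedged sketch of the position updates only, assuming an already created and initialized ILocalSpatialAudioEngine instance; the import paths, exact Java signatures, and RemoteVoicePositionInfo fields are assumptions based on this description and should be checked against the API reference:

```java
import io.agora.spatialaudio.ILocalSpatialAudioEngine;
import io.agora.spatialaudio.RemoteVoicePositionInfo;

// Sketch: keep the spatial audio engine informed of listener and source positions.
void updatePositions(ILocalSpatialAudioEngine localSpatial, int remoteUid) {
    // Local user (listener): position plus forward/right/up orientation vectors.
    float[] selfPosition = new float[] {0.0f, 0.0f, 0.0f};
    float[] forward = new float[] {1.0f, 0.0f, 0.0f};
    float[] right = new float[] {0.0f, 1.0f, 0.0f};
    float[] up = new float[] {0.0f, 0.0f, 1.0f};
    localSpatial.updateSelfPosition(selfPosition, forward, right, up);

    // Remote user (sound source): position and facing direction.
    RemoteVoicePositionInfo positionInfo = new RemoteVoicePositionInfo();
    positionInfo.position = new float[] {5.0f, 0.0f, 0.0f};
    positionInfo.forward = new float[] {-1.0f, 0.0f, 0.0f};
    localSpatial.updateRemotePosition(remoteUid, positionInfo);

    // For a media player source, updatePlayerPositionInfo plays the analogous role.
}
```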
7. Real-time chorus
This release gives real-time chorus the following abilities:
- Two or more choruses are supported.
- Singers are independent of each other. If one singer fails or quits the chorus, the other singers can continue to sing.
- Very low latency experience. Each singer can hear each other in real time, and the audience can also hear each singer in real time.
This release adds the AUDIO_SCENARIO_CHORUS
enumeration. With this enumeration, users can experience ultra-low latency in real-time chorus when the network conditions are good.
8. Extensions from the Agora extensions marketplace
In order to enhance the real-time audio and video interactive activities based on the Agora SDK, this release supports the one-stop solution for the extensions from the Agora extensions marketplace:
- Easy to integrate: The integration of modular functions can be achieved simply by calling an API, and the integration efficiency is improved by nearly 95%.
- Extensibility design: The modular and extensible SDK design style endows the Agora SDK with good extensibility, which enables developers to quickly build real-time interactive apps based on the Agora extensions marketplace ecosystem.
- Build an ecosystem: A community of real-time audio and video apps has developed that can accommodate a wide range of developers, offering a variety of extension combinations. After integrating the extensions, developers can build richer real-time interactive functions. For details, see Use an Extension.
- Become a vendor: Vendors can integrate their products with Agora SDK in the form of extensions, display and publish them in the Agora extensions marketplace, and build a real-time interactive ecosystem for developers together with Agora. For details on how to develop and publish extensions, see Become a Vendor.
9. Enhanced channel management
To meet the channel management requirements of various business scenarios, this release adds the following functions to the ChannelMediaOptions
structure:
- Sets or switches the publishing of multiple audio and video sources.
- Sets or switches channel profile and user role.
- Sets or switches the stream type of the subscribed video.
- Controls audio publishing delay.
Set ChannelMediaOptions
when calling joinChannel
or joinChannelEx
to specify the publishing and subscription behavior of a media stream, for example, whether to publish video streams captured by cameras or screen sharing, and whether to subscribe to the audio and video streams of remote users. After joining the channel, call updateChannelMediaOptions
to update the settings in ChannelMediaOptions
at any time, for example, to switch the published audio and video sources.
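A sketch of this pattern, assuming an initialized RtcEngine plus a valid token and channel name; the field names follow ChannelMediaOptions, but defaults and exact types should be confirmed against the API reference:

```java
import io.agora.rtc2.ChannelMediaOptions;
import io.agora.rtc2.Constants;
import io.agora.rtc2.RtcEngine;

// Sketch: declare publish/subscribe behavior when joining, then adjust it later.
void joinAsHost(RtcEngine rtcEngine, String token, String channelName) {
    ChannelMediaOptions options = new ChannelMediaOptions();
    options.channelProfile = Constants.CHANNEL_PROFILE_LIVE_BROADCASTING;
    options.clientRoleType = Constants.CLIENT_ROLE_BROADCASTER;
    options.publishMicrophoneTrack = true;
    options.publishCameraTrack = true;
    options.autoSubscribeAudio = true;
    options.autoSubscribeVideo = true;
    rtcEngine.joinChannel(token, channelName, 0, options);
}

// Later: switch the published sources without leaving the channel.
void stopPublishingCamera(RtcEngine rtcEngine) {
    ChannelMediaOptions update = new ChannelMediaOptions();
    update.publishCameraTrack = false;
    rtcEngine.updateChannelMediaOptions(update);
}
```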
10. Screen sharing
This release optimizes the screen sharing function. You can enable this function in the following ways.
- Call the startScreenCapture method before joining a channel, and then call joinChannel [2/2] to join a channel and set publishScreenCaptureVideo to true.
- Call the startScreenCapture method after joining a channel, and then call updateChannelMediaOptions to set publishScreenCaptureVideo to true (see the sketch after this list).
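A sketch of the second approach (starting capture after joining), assuming an initialized RtcEngine that has already joined a channel; the ScreenCaptureParameters field names are assumptions for illustration:

```java
import io.agora.rtc2.ChannelMediaOptions;
import io.agora.rtc2.RtcEngine;
import io.agora.rtc2.ScreenCaptureParameters;

// Sketch: start screen capture after joining, then publish it instead of the camera.
void shareScreen(RtcEngine rtcEngine) {
    ScreenCaptureParameters parameters = new ScreenCaptureParameters();
    parameters.captureVideo = true; // field names assumed; see the API reference
    parameters.captureAudio = true;
    rtcEngine.startScreenCapture(parameters);

    ChannelMediaOptions options = new ChannelMediaOptions();
    options.publishCameraTrack = false;
    options.publishScreenCaptureVideo = true;
    options.publishScreenCaptureAudio = true;
    rtcEngine.updateChannelMediaOptions(options);
}
```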
11. Subscription allowlists and blocklists
This release introduces subscription allowlists and blocklists for remote audio and video streams. You can add the user IDs whose streams you want to subscribe to in your allowlist, or add the user IDs whose streams you do not wish to receive to your blocklist. You can experience this feature through the following APIs; in scenarios that involve multiple channels, you can call the following methods in the RtcEngineEx interface:
- setSubscribeAudioBlacklist: Sets the audio subscription blocklist.
- setSubscribeAudioWhitelist: Sets the audio subscription allowlist.
- setSubscribeVideoBlacklist: Sets the video subscription blocklist.
- setSubscribeVideoWhitelist: Sets the video subscription allowlist.
If a user is added to both a blocklist and an allowlist at the same time, only the blocklist takes effect.
12. Set audio scenarios
To make it easier to change audio scenarios, this release adds the setAudioScenario
method. For example, if you want to change the audio scenario from AUDIO_SCENARIO_DEFAULT
to AUDIO_SCENARIO_GAME_STREAMING
when you are in a channel, you can call this method.
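For example, a minimal sketch assuming an initialized RtcEngine that has already joined a channel:

```java
import io.agora.rtc2.Constants;
import io.agora.rtc2.RtcEngine;

// Sketch: switch to the game-streaming audio scenario while in a channel.
void switchToGameStreaming(RtcEngine rtcEngine) {
    rtcEngine.setAudioScenario(Constants.AUDIO_SCENARIO_GAME_STREAMING);
}
```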
Improvements
1. Fast channel switching
This release achieves the same channel switching speed as switchChannel in v3.7.0 through the leaveChannel and joinChannel methods, so you no longer need to call a dedicated switchChannel method.
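A minimal sketch of switching channels this way, assuming an initialized RtcEngine, a valid token for the destination channel, and reusable ChannelMediaOptions:

```java
import io.agora.rtc2.ChannelMediaOptions;
import io.agora.rtc2.RtcEngine;

// Sketch: leave the current channel and immediately join the next one.
void switchToChannel(RtcEngine rtcEngine, String token, String targetChannel,
                     ChannelMediaOptions options) {
    rtcEngine.leaveChannel();
    rtcEngine.joinChannel(token, targetChannel, 0, options);
}
```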
2. Push external video frames
This release supports pushing video frames in I422 format. You can call the pushExternalVideoFrame [1/2] method to push such video frames to the SDK.
3. Voice pitch of the local user
This release adds voicePitch
in AudioVolumeInfo
of onAudioVolumeIndication
. You can use voicePitch
to get the local user's voice pitch and perform business functions such as rating for singing.
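A sketch of reading this field in the event handler, assuming the long-standing onAudioVolumeIndication signature and that uid 0 denotes the local user in the local volume report (both assumptions; confirm against the API reference):

```java
import io.agora.rtc2.IRtcEngineEventHandler;

// Sketch: read the local user's voice pitch from the volume indication callback.
// Register an instance of this handler when creating the engine, typically via
// RtcEngineConfig.mEventHandler.
public class PitchHandler extends IRtcEngineEventHandler {
    @Override
    public void onAudioVolumeIndication(AudioVolumeInfo[] speakers, int totalVolume) {
        for (AudioVolumeInfo speaker : speakers) {
            if (speaker.uid == 0) { // assumed: 0 marks the local user
                double pitch = speaker.voicePitch; // e.g., feed into a singing score
            }
        }
    }
}
```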
4. Device permission management
This release adds the onPermissionError callback, which is triggered automatically when the audio capture device or the camera fails to obtain the required permission. You can enable the corresponding device permission according to the prompt of the callback.
5. Video preview
This release improves the implementation logic of startPreview
. You can call the startPreview
method to enable video preview at any time.
6. Video types of subscription
You can call the setRemoteDefaultVideoStreamType
method to choose the video stream type when subscribing to streams.