Source Code Analysis of the Local Audio/Video Capture Flow in WebRTC on Android

The WebRTC source version analyzed here is org.webrtc:google-webrtc:1.0.32006.
This article only covers the Java-layer source. Before diving in, let's clarify the basic concepts of a few important classes.

  • MediaSource: the base data source for WebRTC media. It has two subclasses: AudioSource (audio source) and VideoSource (video source);
  • MediaStreamTrack: a media track. Each MediaStreamTrack corresponds to one MediaSource, and creating a track requires a MediaSource. It likewise has two subclasses: AudioTrack (audio track, backed by an AudioSource) and VideoTrack (video track, backed by a VideoSource);
  • MediaStream: a media stream. A single stream can hold multiple AudioTracks and VideoTracks; typically we add just one of each.

When making audio/video calls with WebRTC, you first need to build a PeerConnectionFactory. This connection factory is required to create the local LocalMediaStream, the client-side PeerConnection, and so on. Building it looks roughly like this:

//Must be called at least once before creating a PeerConnectionFactory.
//Must not be called while a PeerConnectionFactory is alive.
PeerConnectionFactory.initialize(
    PeerConnectionFactory.InitializationOptions
        .builder(applicationContext).createInitializationOptions()
)
val eglBaseContext = EglBase.create().eglBaseContext
//Video encoder factory
val encoderFactory = DefaultVideoEncoderFactory(eglBaseContext, true, true)
//Video decoder factory
val decoderFactory = DefaultVideoDecoderFactory(eglBaseContext)
val audioDeviceModule = JavaAudioDeviceModule.builder(this)
            .setSamplesReadyCallback { audioSamples ->
                //Microphone input, i.e. the audio of the LocalMediaStream during a call.
                //PCM format, typically used for recording.
            }
            .createAudioDeviceModule()
val peerConnectionFactory = PeerConnectionFactory.builder()
            .setVideoEncoderFactory(encoderFactory)
            .setVideoDecoderFactory(decoderFactory)
            .setAudioDeviceModule(audioDeviceModule)
            .createPeerConnectionFactory()
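
If you want to persist the microphone PCM delivered by setSamplesReadyCallback, a minimal sketch could look like the following (the output file and the inline write are assumptions for illustration; the callback runs on the audio thread, so real code should hand the data off to a worker thread):

//Minimal recording sketch: append every PCM chunk to a file (uses java.io.File / FileOutputStream).
val pcmFile = File(getExternalFilesDir(null), "local_mic.pcm") //hypothetical output path
val audioDeviceModule = JavaAudioDeviceModule.builder(this)
    .setSamplesReadyCallback { audioSamples ->
        //audioSamples.data is raw PCM; sample rate, channel count and format are
        //available via audioSamples.sampleRate / channelCount / audioFormat.
        FileOutputStream(pcmFile, /* append = */ true).use { it.write(audioSamples.data) }
    }
    .createAudioDeviceModule()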

If you need to enable H.264 video encoding in WebRTC, refer to the article “Android端WebRTC启用H264编码”.

MediaSource

The base class for WebRTC media data sources.

AudioSource (audio source)

An AudioSource is created via peerConnectionFactory.createAudioSource(MediaConstraints); the parameter is a set of media constraints, roughly as follows:

    //Audio source
    val audioSource = peerConnectionFactory.createAudioSource(createAudioConstraints())
    private fun createAudioConstraints(): MediaConstraints {
        val audioConstraints = MediaConstraints()
        //Echo cancellation
        audioConstraints.mandatory.add(
            MediaConstraints.KeyValuePair(
                "googEchoCancellation",
                "true"
            )
        )
        //Automatic gain control
        audioConstraints.mandatory.add(MediaConstraints.KeyValuePair("googAutoGainControl", "true"))
        //High-pass filter
        audioConstraints.mandatory.add(MediaConstraints.KeyValuePair("googHighpassFilter", "true"))
        //Noise suppression
        audioConstraints.mandatory.add(
            MediaConstraints.KeyValuePair(
                "googNoiseSuppression",
                "true"
            )
        )
        return audioConstraints
    }

In practice, the concrete handling of audio input and output lives in JavaAudioDeviceModule:

package org.webrtc.audio;
/**
 * AudioDeviceModule implemented using android.media.AudioRecord as input and
 * android.media.AudioTrack as output.
 */
public class JavaAudioDeviceModule implements AudioDeviceModule {
    ...
    /**
     * Input data captured from the local microphone, backed by android.media.AudioRecord
     */
    private final WebRtcAudioRecord audioInput;
    /**
     * Plays the remote party's audio during a call, backed by android.media.AudioTrack
     */
    private final WebRtcAudioTrack audioOutput;
    ...
}
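
Besides the samples-ready callback shown earlier, JavaAudioDeviceModule.Builder exposes a few more knobs for the capture/playout setup. A hedged sketch of commonly used options (the chosen values are only examples):

val audioDeviceModule = JavaAudioDeviceModule.builder(context)
    .setSampleRate(16000)                       //capture/playout sample rate (example value)
    .setUseHardwareAcousticEchoCanceler(true)   //prefer the device's hardware AEC when available
    .setUseHardwareNoiseSuppressor(true)        //prefer the device's hardware NS when available
    .setUseStereoInput(false)                   //mono microphone input
    .setSamplesReadyCallback { audioSamples -> /* PCM input, as above */ }
    .createAudioDeviceModule()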

WebRtcAudioRecord, which records the audio data captured from the local microphone:

package org.webrtc.audio;

import android.media.AudioRecord;

class WebRtcAudioRecord {
    ...
    private AudioRecord audioRecord;
    /**
     * Capture (read) thread
     */
    private AudioRecordThread audioThread;

  /**
   * Audio thread which keeps calling ByteBuffer.read() waiting for audio
   * to be recorded. Feeds recorded data to the native counterpart as a
   * periodic sequence of callbacks using DataIsRecorded().
   * This thread uses a Process.THREAD_PRIORITY_URGENT_AUDIO priority.
   */
    private class AudioRecordThread extends Thread {
        private volatile boolean keepAlive = true;

        public AudioRecordThread(String name) {
            super(name);
        }

        public void run() {
            Process.setThreadPriority(Process.THREAD_PRIORITY_URGENT_AUDIO);
            //Make sure we are actually in the recording state
            assertTrue(audioRecord.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING);
            //Report the "started" state to the outside
            doAudioRecordStateCallback(AUDIO_RECORD_START);
            //Keep reading while the thread is alive
            while(keepAlive) {
                int bytesRead = audioRecord.read(byteBuffer, byteBuffer.capacity());
                if (bytesRead == byteBuffer.capacity()) {
                    if (microphoneMute) {
                        //If the microphone is muted, overwrite the captured data with silence
                        byteBuffer.clear();
                        byteBuffer.put(WebRtcAudioRecord.this.emptyBytes);
                    }

                    if (keepAlive) {
                        //Hand the recorded data to the native layer
                        nativeDataIsRecorded(nativeAudioRecord, bytesRead);
                    }
                    //Report the audio data (PCM format) to the external callback
                    if (audioSamplesReadyCallback != null) {
                        byte[] data = Arrays.copyOfRange(byteBuffer.array(), byteBuffer.arrayOffset(), byteBuffer.capacity() + byteBuffer.arrayOffset());
                        audioSamplesReadyCallback.onWebRtcAudioRecordSamplesReady(new AudioSamples(audioRecord.getAudioFormat(), audioRecord.getChannelCount(), audioRecord.getSampleRate(), data));
                    }
                } else {
                    String errorMessage = "AudioRecord.read failed: " + bytesRead;
                    Logging.e("WebRtcAudioRecordExternal", errorMessage);
                    if (bytesRead == AudioRecord.ERROR_INVALID_OPERATION) {
                        keepAlive = false;
                        reportWebRtcAudioRecordError(errorMessage);
                    }
                }
            }

            try {
                if (audioRecord != null) {
                    audioRecord.stop();
                    //Report the "stopped" state to the outside
                    doAudioRecordStateCallback(AUDIO_RECORD_STOP);
                }
            } catch (IllegalStateException e) {
                Logging.e("WebRtcAudioRecordExternal", "AudioRecord.stop failed: " + e.getMessage());
            }
        }
        /*
         * Stop reading
         */
        public void stopThread() {
            Logging.d("WebRtcAudioRecordExternal", "stopThread");
            this.keepAlive = false;
        }
    }
    ...
}

WebRtcAudioTrack, which plays the received audio data (this is not part of the capture flow, but worth a quick mention):

package org.webrtc.audio;

import android.media.AudioTrack;

class WebRtcAudioTrack {
    ...
    private AudioTrack audioTrack;
    /**
     * Playout thread
     */
    private AudioTrackThread audioThread;

  /**
   * Audio thread which keeps calling AudioTrack.write() to stream audio.
   * Data is periodically acquired from the native WebRTC layer using the
   * nativeGetPlayoutData callback function.
   * This thread uses a Process.THREAD_PRIORITY_URGENT_AUDIO priority.
   */
    private class AudioTrackThread extends Thread {
        private volatile boolean keepAlive = true;

        public AudioTrackThread(String name) {
            super(name);
        }

        public void run() {
            Process.setThreadPriority(Process.THREAD_PRIORITY_URGENT_AUDIO);
            //Make sure we are in the playing state
            assertTrue(audioTrack.getPlayState() == AudioTrack.PLAYSTATE_PLAYING);
            //Report the "started" state to the outside
            doAudioTrackStateCallback(AUDIO_TRACK_START);

            for(int sizeInBytes = byteBuffer.capacity(); keepAlive; byteBuffer.rewind()) {
                //Ask the native layer to fill byteBuffer with playout data
                nativeGetPlayoutData(nativeAudioTrack, sizeInBytes);
                assertTrue(sizeInBytes <= byteBuffer.remaining());
                if (speakerMute) {
                    //If the speaker is muted, overwrite the data with silence
                    byteBuffer.clear();
                    byteBuffer.put(emptyBytes);
                    byteBuffer.position(0);
                }
                //Write the PCM data into AudioTrack for playback
                int bytesWritten = writeBytes(audioTrack, byteBuffer, sizeInBytes);
                if (bytesWritten != sizeInBytes) {
                    Logging.e("WebRtcAudioTrackExternal", "AudioTrack.write played invalid number of bytes: " + bytesWritten);
                    if (bytesWritten < 0) {
                        keepAlive = false;
                        reportWebRtcAudioTrackError("AudioTrack.write failed: " + bytesWritten);
                    }
                }
            }

        }

        private int writeBytes(AudioTrack audioTrack, ByteBuffer byteBuffer, int sizeInBytes) {
            return VERSION.SDK_INT >= 21 ? audioTrack.write(byteBuffer, sizeInBytes, 0) : audioTrack.write(byteBuffer.array(), byteBuffer.arrayOffset(), sizeInBytes);
        }

        public void stopThread() {
            Logging.d("WebRtcAudioTrackExternal", "stopThread");
            keepAlive = false;
        }
    }
    ...
}
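
Both loops above check a mute flag (microphoneMute for capture, speakerMute for playout). These flags are not toggled inside the two classes; they are set from the outside through JavaAudioDeviceModule, so muting during a call is simply:

//Mute the local microphone: the capture loop keeps running but delivers silence.
audioDeviceModule.setMicrophoneMute(true)
//Mute the remote audio: the playout loop writes silence into AudioTrack.
audioDeviceModule.setSpeakerMute(true)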

Note that neither WebRtcAudioRecord nor WebRtcAudioTrack is public. WebRTC only exposes the locally captured microphone data and does not expose the audio output data. If you need the audio received during a WebRTC call, see the article “Android端WebRTC音视频通话录音-获取音频输出数据”.

VideoSource (video source)

VideoSource is a video source. It supports secondary processing of the video data (via VideoProcessor) and handles resizing, format adaptation and sending of the video data (the concrete work is done by the JNI wrapper NativeAndroidVideoTrackSource).
A VideoSource obtains its video through a CapturerObserver: the video capturer gets the observer via videoSource.getCapturerObserver() and uses it to feed frames into the VideoSource.
Let's first look at how a VideoSource is created and used:

    //Video capturer
    val videoCapturer = createVideoCapture(context)
    videoCapturer?.let { capturer ->
        val videoSource = peerConnectionFactory.createVideoSource(capturer.isScreencast)
        //Helper class for creating WebRTC VideoFrames from a SurfaceTexture:
        //render onto the SurfaceTexture, and the resulting frames are passed to the listener.
        val surfaceTextureHelper =
            SurfaceTextureHelper.create("surface_texture_thread", eglBaseContext)
        //Hand the SurfaceTextureHelper and the CapturerObserver to the VideoCapturer
        capturer.initialize(surfaceTextureHelper, context, videoSource.capturerObserver)
    }
    /**
     * Create a camera video capturer
     */
    private fun createVideoCapture(context: Context): CameraVideoCapturer? {
        val enumerator: CameraEnumerator = if (Camera2Enumerator.isSupported(context)) {
            Camera2Enumerator(context)
        } else {
            Camera1Enumerator()
        }

        for (name in enumerator.deviceNames) {
            if (enumerator.isFrontFacing(name)) {
                return enumerator.createCapturer(name, null)
            }
        }
        for (name in enumerator.deviceNames) {
            if (enumerator.isBackFacing(name)) {
                return enumerator.createCapturer(name, null)
            }
        }
        return null
    }
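
Note that after initialize() the capturer still has to be started before any frames reach the CapturerObserver. A short sketch of the remaining lifecycle calls (the resolution and frame rate are only example values):

    //Start delivering frames to videoSource.capturerObserver
    capturer.startCapture(1280, 720, 30)
    ...
    //Stop and release everything when the call ends
    capturer.stopCapture()
    capturer.dispose()
    surfaceTextureHelper.dispose()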

Now let's look at the key parts of VideoSource:

package org.webrtc;

public class VideoSource extends MediaSource {
    private final NativeAndroidVideoTrackSource nativeAndroidVideoTrackSource;
    /**
     * Video processor that lets you post-process the image data yourself,
     * e.g. adding a watermark, rotating or cropping the video.
     */
    @Nullable
    private VideoProcessor videoProcessor;
    private final Object videoProcessorLock = new Object();
    private final CapturerObserver capturerObserver = new CapturerObserver() {
        /**
         * Capture started
         */
        @Override
        public void onCapturerStarted(boolean success) {
            nativeAndroidVideoTrackSource.setState(success);
            synchronized(videoProcessorLock) {
                isCapturerRunning = success;
                if (videoProcessor != null) {
                    videoProcessor.onCapturerStarted(success);
                }
            }
        }
        /**
         * Capture stopped
         */
        @Override
        public void onCapturerStopped() {
            nativeAndroidVideoTrackSource.setState(false);
            synchronized(videoProcessorLock) {
                isCapturerRunning = false;
                if (videoProcessor != null) {
                    videoProcessor.onCapturerStopped();
                }
            }
        }
        /**
         * A frame has been captured
         * @param frame the captured video frame
         */
        @Override
        public void onFrameCaptured(VideoFrame frame) {
            //This should be called before delivering any frame, to decide whether the frame should be dropped
            //and what the crop/scale parameters are. If {FrameAdaptationParameters#drop} is true the frame should
            //be dropped; otherwise the frame should be adapted according to the parameters before calling onFrameCaptured().
            FrameAdaptationParameters parameters = nativeAndroidVideoTrackSource.adaptFrame(frame);
            synchronized(videoProcessorLock) {
                if (videoProcessor != null) {
                    //If a video processor is set, let it handle the frame and return.
                    videoProcessor.onFrameCaptured(frame, parameters);
                    return;
                }
            }
            //Adapt the frame according to the parameters; this calls {VideoFrame.getBuffer().cropAndScale()} to crop and scale.
            VideoFrame adaptedFrame = VideoProcessor.applyFrameAdaptationParameters(frame, parameters);
            if (adaptedFrame != null) {
                //If not null, pass the frame to the native layer
                nativeAndroidVideoTrackSource.onFrameCaptured(adaptedFrame);
                adaptedFrame.release();
            }
        }
    };
    /**
     * Adapt the output format
     */
    public void adaptOutputFormat(int width, int height, int fps) {
        int maxSide = Math.max(width, height);
        int minSide = Math.min(width, height);
        this.adaptOutputFormat(maxSide, minSide, minSide, maxSide, fps);
    }

    public void adaptOutputFormat(int landscapeWidth, int landscapeHeight, int portraitWidth, int portraitHeight, int fps) {
        this.adaptOutputFormat(new VideoSource.AspectRatio(landscapeWidth, landscapeHeight), landscapeWidth * landscapeHeight, new VideoSource.AspectRatio(portraitWidth, portraitHeight), portraitWidth * portraitHeight, fps);
    }

    public void adaptOutputFormat(VideoSource.AspectRatio targetLandscapeAspectRatio, @Nullable Integer maxLandscapePixelCount, VideoSource.AspectRatio targetPortraitAspectRatio, @Nullable Integer maxPortraitPixelCount, @Nullable Integer maxFps) {
        nativeAndroidVideoTrackSource.adaptOutputFormat(targetLandscapeAspectRatio, maxLandscapePixelCount, targetPortraitAspectRatio, maxPortraitPixelCount, maxFps);
    }

    /**
     * Set a video processor
     */
    public void setVideoProcessor(@Nullable VideoProcessor newVideoProcessor) {
        synchronized(videoProcessorLock) {
            //If a processor was set before, detach it first
            if (videoProcessor != null) {
                videoProcessor.setSink(null);
                if (isCapturerRunning) {
                    videoProcessor.onCapturerStopped();
                }
            }

            videoProcessor = newVideoProcessor;
            if (newVideoProcessor != null) {
                newVideoProcessor.setSink(new VideoSink() {
                    @Override
                    public void onFrame(VideoFrame frame) {
                        //The frame delivered here has already been processed by you;
                        //it is passed to the native layer just like above.
                        runWithReference(() -> {
                          nativeAndroidVideoTrackSource.onFrameCaptured(frame);
                        });
                    }
                });
                if (isCapturerRunning) {
                    newVideoProcessor.onCapturerStarted(true);
                }
            }
        }
    }

    /**
     * Get the capturer observer
     */
    public CapturerObserver getCapturerObserver() {
        return capturerObserver;
    }
}
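
The adaptOutputFormat() methods above let you cap the resolution and frame rate that the source delivers to the native layer, for example (values are illustrative):

videoSource.adaptOutputFormat(1280, 720, 30)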

VideoProcessor

A video processor lets you perform secondary processing on the original video data yourself. Example usage:

/**
 * Applies a fixed rotation to every frame
 */
class RotationVideoProcessor(
    /**
     * Rotation in degrees;
     * must be one of 0, 90, 180 or 270
     */
    private val rotation: Int
) : VideoProcessor {
    private var mSink: VideoSink? = null

    override fun setSink(sink: VideoSink?) {
        mSink = sink
    }

    override fun onCapturerStarted(success: Boolean) {

    }

    override fun onCapturerStopped() {
    }


    override fun onFrameCaptured(frame: VideoFrame) {
        mSink?.onFrame(VideoFrame(frame.buffer, rotation, frame.timestampNs))
    }
    
}
...
videoSource.setVideoProcessor(RotationVideoProcessor(90))

CapturerObserver

Interface for observing a capturer. Obtain the CapturerObserver via VideoSource.getCapturerObserver(), then pass it to VideoCapturer.initialize(SurfaceTextureHelper, Context, CapturerObserver).

VideoCapturer

The base interface for all video capturers.
WebRTC video capture on Android offers several options: camera (CameraCapturer), screen (ScreenCapturerAndroid) and file (FileVideoCapturer). All three are concrete implementations of VideoCapturer.

CameraCapturer

This is the capturer you will use most often with WebRTC.
A CameraCapturer is usually created through a CameraEnumerator, which also exposes camera parameters and configuration; the concrete enumerators are Camera1Enumerator and Camera2Enumerator.
CameraCapturer wraps the capture operations, exposing methods to start/stop capture and set capture parameters, and is responsible for creating a CameraSession. The CameraSession handles camera-related event callbacks and video-frame callbacks; its concrete implementations are Camera1Session and Camera2Session, corresponding to the two CameraCapturer subclasses Camera1Capturer and Camera2Capturer.
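
CameraVideoCapturer additionally supports switching cameras at runtime; a hedged sketch (videoCapturer is the CameraVideoCapturer created earlier, and the callback bodies are illustrative):

videoCapturer?.switchCamera(object : CameraVideoCapturer.CameraSwitchHandler {
    override fun onCameraSwitchDone(isFrontCamera: Boolean) {
        //switch finished; isFrontCamera tells you which camera is now active
    }

    override fun onCameraSwitchError(errorDescription: String) {
        //switch failed
    }
})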

ScreenCapturerAndroid

Captures the screen contents as a video stream. It uses MediaProjection, renders into a SurfaceTexture, and produces texture data together with SurfaceTextureHelper. Only supported on Android 5.0 and above.
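
Creating one requires the user's MediaProjection permission first. A hedged sketch (REQUEST_SCREEN_CAPTURE and resultData are placeholders for your own request code and the onActivityResult data):

//Ask the user for screen-capture permission (Android 5.0+)
val projectionManager =
    getSystemService(Context.MEDIA_PROJECTION_SERVICE) as MediaProjectionManager
startActivityForResult(projectionManager.createScreenCaptureIntent(), REQUEST_SCREEN_CAPTURE)

//In onActivityResult, build the capturer from the returned permission intent
val screenCapturer = ScreenCapturerAndroid(resultData, object : MediaProjection.Callback() {
    override fun onStop() {
        //the projection was stopped by the user or the system
    }
})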

FileVideoCapturer

Turns a file into a video stream. The file must be in .y4m format.
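
A minimal creation sketch (the file path is only an example; the constructor throws IOException if the file cannot be opened):

val fileCapturer = FileVideoCapturer("/sdcard/sample.y4m")
fileCapturer.initialize(surfaceTextureHelper, context, videoSource.capturerObserver)
fileCapturer.startCapture(640, 480, 30)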

VideoFrame

A video is essentially a sequence of frames, i.e. buffers of image data. WebRTC wraps them uniformly as VideoFrame, which holds three values: the frame data buffer, the image rotation, and a timestamp.

public class VideoFrame implements RefCounted {
    /**
     * Frame data buffer
     */
    private final VideoFrame.Buffer buffer;
    /**
     * Rotation in degrees
     */
    private final int rotation;
    /**
     * Timestamp in nanoseconds
     */
    private final long timestampNs;
}

VideoFrame.Buffer

VideoFrame.Buffer is the base interface that every pixel format must implement. It represents the image storage medium, which may be an OpenGL texture or a memory region containing I420 data. Because a video buffer can be shared between multiple VideoSinks, it is reference counted; once all references are gone, the buffer is returned to the VideoSource.
Every implementation must also provide toI420(), since I420 is the most widely accepted format.

    public interface Buffer extends RefCounted {
        /**
         * Resolution of the buffer in pixels.
         */
        @CalledByNative("Buffer")
        int getWidth();

        @CalledByNative("Buffer")
        int getHeight();

        /**
         * Returns a memory-backed frame in I420 format. If the pixel data is in another format, a
         * conversion will take place. All implementations must provide a fallback to I420 for
         * compatibility with e.g. the internal WebRTC software encoders.
         */
        @CalledByNative("Buffer")
        I420Buffer toI420();

        @Override
        @CalledByNative("Buffer")
        void retain();

        @Override
        @CalledByNative("Buffer")
        void release();

        /**
         * Crops a region defined by |cropX|, |cropY|, |cropWidth| and |cropHeight|. Scales it to size
         * |scaleWidth| x |scaleHeight|.
         */
        @CalledByNative("Buffer")
        Buffer cropAndScale(
                int cropX, int cropY, int cropWidth, int cropHeight, int scaleWidth, int scaleHeight);
    }

The WebRTC source defines several YUV-format buffers and a texture-format buffer:

  • NV12Buffer
    This format only shows up during video decoding, specifically in AndroidVideoDecoder.copyNV12ToI420Buffer(); we won't discuss it further here.
public class NV12Buffer implements VideoFrame.Buffer {
    private final int width;
    private final int height;
    private final int stride;
    private final int sliceHeight;
    private final ByteBuffer buffer;
    private final RefCountDelegate refCountDelegate;

    @Override
    public VideoFrame.Buffer cropAndScale(
        int cropX, int cropY, int cropWidth, int cropHeight, int scaleWidth, int scaleHeight) {
      //Note: after cropAndScale() an NV12Buffer is converted directly into an I420Buffer.
      JavaI420Buffer newBuffer = JavaI420Buffer.allocate(scaleWidth, scaleHeight);
      nativeCropAndScale(cropX, cropY, cropWidth, cropHeight, scaleWidth, scaleHeight, buffer, width,
          height, stride, sliceHeight, newBuffer.getDataY(), newBuffer.getStrideY(),
          newBuffer.getDataU(), newBuffer.getStrideU(), newBuffer.getDataV(), newBuffer.getStrideV());
      return newBuffer;
    }

    private static native void nativeCropAndScale(int cropX, int cropY, int cropWidth, int cropHeight,
        int scaleWidth, int scaleHeight, ByteBuffer src, int srcWidth, int srcHeight, int srcStride,
        int srcSliceHeight, ByteBuffer dstY, int dstStrideY, ByteBuffer dstU, int dstStrideU, ByteBuffer dstV, int dstStrideV);
}
  • NV21Buffer
    This format is not used much either. It can only be returned by Camera1Capturer, which supports two formats, NV21Buffer and TextureBuffer, selectable via a constructor parameter:
public class Camera1Enumerator implements CameraEnumerator {
    ...
    /**
     * Whether to capture texture data from the SurfaceTexture; default: true.
     * The value of this parameter is eventually passed to {Camera1Session#captureToTexture}.
     * true  -> TextureBuffer
     * false -> NV21Buffer
     */
    private final boolean captureToTexture;

    public Camera1Enumerator() {
        this( /* captureToTexture */ true);
    }
    /**
     * @param captureToTexture true  -> TextureBuffer
     *                         false -> NV21Buffer
     */
    public Camera1Enumerator(boolean captureToTexture) {
        this.captureToTexture = captureToTexture;
    }
    ...
}
public class NV21Buffer implements VideoFrame.Buffer {
    /**
      * NV21 data
      */
    private final byte[] data;
    private final int width;
    private final int height;
    private final RefCountDelegate refCountDelegate;

    @Override
    public VideoFrame.Buffer cropAndScale(
        int cropX, int cropY, int cropWidth, int cropHeight, int scaleWidth, int scaleHeight) {
      //Note: after cropAndScale() an NV21Buffer is also converted directly into an I420Buffer.
      JavaI420Buffer newBuffer = JavaI420Buffer.allocate(scaleWidth, scaleHeight);
      nativeCropAndScale(cropX, cropY, cropWidth, cropHeight, scaleWidth, scaleHeight, data, width,
          height, newBuffer.getDataY(), newBuffer.getStrideY(), newBuffer.getDataU(),
          newBuffer.getStrideU(), newBuffer.getDataV(), newBuffer.getStrideV());
      return newBuffer;
    }

    private static native void nativeCropAndScale(int cropX, int cropY, int cropWidth, int cropHeight,
        int scaleWidth, int scaleHeight, byte[] src, int srcWidth, int srcHeight, ByteBuffer dstY,
        int dstStrideY, ByteBuffer dstU, int dstStrideU, ByteBuffer dstV, int dstStrideV);
}
  • TextureBuffer
    TextureBuffer is an interface; the concrete implementation is TextureBufferImpl.
    It represents a buffer stored as a single texture, in either OES or RGB format; WebRTC uses OES.
    Camera2Capturer and ScreenCapturerAndroid only return this buffer type;
    it is also the default type returned by Camera1Capturer.
/**
 * Interface for buffers that are stored as a single texture, either in OES or RGB format.
 */
public interface TextureBuffer extends Buffer {
    enum Type {
        OES(GLES11Ext.GL_TEXTURE_EXTERNAL_OES),
        RGB(GLES20.GL_TEXTURE_2D);
        private final int glTarget;
        private Type(final int glTarget) {
            this.glTarget = glTarget;
        }
        public int getGlTarget() {
             return glTarget;
        }
    }
    Type getType();
    int getTextureId();
   /**
    * Retrieve the transform matrix associated with the frame. This transform matrix maps 2D
    * homogeneous coordinates of the form (s, t, 1) with s and t in the inclusive range [0, 1] to
    * the coordinate that should be used to sample that location from the buffer.
    */
    Matrix getTransformMatrix();
}

public class TextureBufferImpl implements VideoFrame.TextureBuffer {
    // This is the full resolution the texture has in memory after applying the transformation matrix
    // that might include cropping. This resolution is useful to know when sampling the texture to
    // avoid downscaling artifacts.
    private final int unscaledWidth;
    private final int unscaledHeight;
    // This is the resolution that has been applied after cropAndScale().
    private final int width;
    private final int height;
    private final Type type;
    private final int id;
    private final Matrix transformMatrix;
    private final Handler toI420Handler;
    private final YuvConverter yuvConverter;
    private final RefCountDelegate refCountDelegate;
}
  • I420Buffer
    I420Buffer is an interface with two implementations, JavaI420Buffer and WrappedNativeI420Buffer; JavaI420Buffer is the one commonly used.
    Every Buffer subclass must implement toI420(), i.e. support conversion to an I420Buffer (see the sketch after this list).
    FileVideoCapturer returns I420Buffer data.
/**
 * Interface for I420 buffers.
 */
public interface I420Buffer extends VideoFrame.Buffer {
    /**
     * Returns a direct ByteBuffer containing Y-plane data. The buffer capacity is at least
     * getStrideY() * getHeight() bytes. The position of the returned buffer is ignored and must
     * be 0. Callers may mutate the ByteBuffer (eg. through relative-read operations), so
     * implementations must return a new ByteBuffer or slice for each call.
     */
    @CalledByNative("I420Buffer")
    ByteBuffer getDataY();

    /**
     * Returns a direct ByteBuffer containing U-plane data. The buffer capacity is at least
     * getStrideU() * ((getHeight() + 1) / 2) bytes. The position of the returned buffer is ignored
     * and must be 0. Callers may mutate the ByteBuffer (eg. through relative-read operations), so
     * implementations must return a new ByteBuffer or slice for each call.
     */
    @CalledByNative("I420Buffer")
    ByteBuffer getDataU();

    /**
     * Returns a direct ByteBuffer containing V-plane data. The buffer capacity is at least
     * getStrideV() * ((getHeight() + 1) / 2) bytes. The position of the returned buffer is ignored
     * and must be 0. Callers may mutate the ByteBuffer (eg. through relative-read operations), so
     * implementations must return a new ByteBuffer or slice for each call.
     */
    @CalledByNative("I420Buffer")
    ByteBuffer getDataV();

    @CalledByNative("I420Buffer")
    int getStrideY();

    @CalledByNative("I420Buffer")
    int getStrideU();

    @CalledByNative("I420Buffer")
    int getStrideV();
}


/** Implementation of VideoFrame.I420Buffer backed by Java direct byte buffers. */
public class JavaI420Buffer implements VideoFrame.I420Buffer {
  private final int width;
  private final int height;
  private final ByteBuffer dataY;
  private final ByteBuffer dataU;
  private final ByteBuffer dataV;
  private final int strideY;
  private final int strideU;
  private final int strideV;
  private final RefCountDelegate refCountDelegate;
}
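
When you need raw YUV pixels regardless of which Buffer implementation a frame carries, the usual pattern is to convert it. A minimal sketch inside a VideoSink (which can be attached to a VideoTrack via addSink(), as shown later; the consuming code is omitted):

val yuvSink = VideoSink { frame ->
    //toI420() converts TextureBuffer / NV21Buffer etc. into a memory-backed I420 buffer
    val i420 = frame.buffer.toI420()
    val yPlane = i420.dataY          //direct ByteBuffers; dataU / dataV hold the chroma planes
    val strideY = i420.strideY
    //... consume the Y/U/V planes here ...
    i420.release()                   //we own the converted buffer, so release our reference
}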

MediaStreamTrack

A media track. Each MediaStreamTrack corresponds to one MediaSource. Using it is quite simple.

AudioTrack

Pass in an AudioSource to create an audio track.

val audioTrack = peerConnectionFactory.createAudioTrack("local_audio_track", audioSource)
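
A track also offers simple runtime control; for example, disabling an audio track makes it send silence while the source keeps running (AudioTrack.setVolume() additionally takes a gain value in the range 0 to 10; the value below is illustrative):

//Temporarily mute the local audio track
audioTrack.setEnabled(false)
audioTrack.setVolume(5.0)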

VideoTrack

Pass in a VideoSource to create a video track.

val videoTrack = peerConnectionFactory.createVideoTrack("local_video_track", videoSource)
...
val svr = findViewById<SurfaceViewRenderer>(R.id.srv)
svr.init(eglBaseContext, null)
//Render the video on a SurfaceViewRenderer
videoTrack.addSink(svr)
...

MediaStream

Create a MediaStream and simply add the audio and video tracks to it; this is also straightforward.

val mediaStream = peerConnectionFactory.createLocalMediaStream("local_stream")
mediaStream.addTrack(audioTrack)
mediaStream.addTrack(videoTrack)
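
To actually send the captured media to the remote side, the tracks are attached to a PeerConnection. A hedged sketch using the track-based API (the peerConnection instance and the stream id are assumptions):

peerConnection.addTrack(audioTrack, listOf("local_stream"))
peerConnection.addTrack(videoTrack, listOf("local_stream"))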

Summary

This concludes the analysis of the local audio/video capture flow on Android. As you can see, the real work is in capturing the audio and video sources; creating and using the tracks and the media stream is simple. If you have no special requirements, the defaults used in the source are enough. If you need things like call recording or secondary video processing, you will want a thorough understanding of the various data formats; this article has covered the audio/video capture paths and the hooks for secondary video processing, so what remains is adapting them to your own needs. I will follow up with a separate post on secondary video processing.

Corrections and suggestions are welcome.
