This analysis is based on the Android 9.0 source code.
1. MediaRecorder Overall Architecture
1.1 Overall Layering Diagram
At runtime, MediaRecorder splits into a client part and a server part running in two separate processes, which communicate through the Binder IPC mechanism.
MediaPlayerService is one of the most important services in the multimedia framework. As the architecture diagram shows, MediaRecorder is the client, while MediaPlayerService and MediaRecorderClient form the server side: MediaPlayerService implements the business logic defined by the IMediaPlayerService interface, and MediaRecorderClient implements the business logic defined by the IMediaRecorder interface, whose main operations include prepare, start, pause, resume, stop, reset, and release.
Callback events travel from the C++/JNI layer to the Java layer as follows. In the JNI layer, mr->setListener(listener) installs a JNIMediaRecorderListener on the native MediaRecorder. Its notify(int msg, int ext1, int ext2) method invokes, through a cached JNI method handle, the Java method private static void postEventFromNative(Object mediarecorder_ref, int what, int arg1, int arg2, Object obj) in MediaRecorder.java, which then posts the event through EventHandler back onto the app's main (or creating) thread.
1.2 State Transition Diagram
The diagram describes a state machine: each state transitions to the next when the corresponding API call fires under the right conditions. Note that two additional calls exist that the diagram does not show: pause() and resume(), which pause and resume recording.
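The transition rules above can be sketched as a small state machine in plain Java. The state names and allowed transitions below are an illustration paraphrasing the diagram, not framework code:

```java
import java.util.*;

// Hypothetical illustration of the MediaRecorder state machine described above.
class RecorderStateMachine {
    enum State { IDLE, INITIALIZED, CONFIGURED, PREPARED, RECORDING, PAUSED, ERROR }

    private static final Map<State, EnumSet<State>> ALLOWED = new EnumMap<>(State.class);
    static {
        ALLOWED.put(State.IDLE,        EnumSet.of(State.INITIALIZED));             // setAudioSource/setVideoSource
        ALLOWED.put(State.INITIALIZED, EnumSet.of(State.CONFIGURED, State.IDLE));  // setOutputFormat / reset
        ALLOWED.put(State.CONFIGURED,  EnumSet.of(State.PREPARED, State.IDLE));    // prepare / reset
        ALLOWED.put(State.PREPARED,    EnumSet.of(State.RECORDING, State.IDLE));   // start / reset
        ALLOWED.put(State.RECORDING,   EnumSet.of(State.PAUSED, State.IDLE));      // pause / stop+reset
        ALLOWED.put(State.PAUSED,      EnumSet.of(State.RECORDING, State.IDLE));   // resume / stop+reset
        ALLOWED.put(State.ERROR,       EnumSet.of(State.IDLE));                    // reset
    }

    private State state = State.IDLE;
    State state() { return state; }

    // Attempt a transition; an illegal one puts the machine into ERROR,
    // mirroring how MediaRecorder throws IllegalStateException.
    boolean moveTo(State next) {
        if (ALLOWED.getOrDefault(state, EnumSet.noneOf(State.class)).contains(next)) {
            state = next;
            return true;
        }
        state = State.ERROR;
        return false;
    }

    public static void main(String[] args) {
        RecorderStateMachine m = new RecorderStateMachine();
        m.moveTo(State.INITIALIZED);
        m.moveTo(State.CONFIGURED);
        m.moveTo(State.PREPARED);
        m.moveTo(State.RECORDING);
        m.moveTo(State.PAUSED);    // pause()
        m.moveTo(State.RECORDING); // resume()
        System.out.println(m.state());
    }
}
```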
2. A Walk Through the Implementation
This part follows the call flow of a basic audio/video recording session through the main modules and code layers, to expose the key classes in the overall architecture and how they relate. We start from a typical recording setup:
private void initRecord() throws IOException {
    mMediaRecorder = new MediaRecorder();
    try {
        mMediaRecorder.reset();
        if (mCamera != null)
            mMediaRecorder.setCamera(mCamera);
        mMediaRecorder.setOnErrorListener(this);
        mMediaRecorder.setPreviewDisplay(mSurfaceHolder.getSurface());
        mMediaRecorder.setVideoSource(VideoSource.CAMERA);   // video source
        mMediaRecorder.setAudioSource(AudioSource.DEFAULT);  // audio source
        mMediaRecorder.setVideoEncodingBitRate(5 * 1024 * 1024);
        mMediaRecorder.setOrientationHint(90);  // rotate output 90 degrees to keep portrait recording
        mMediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);  // output container format
        mMediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);  // audio encoder
        mMediaRecorder.setVideoEncoder(MediaRecorder.VideoEncoder.MPEG_4_SP);  // video encoder
        // Video resolution: must come after the encoder/format calls, or an error is thrown
        mMediaRecorder.setVideoSize(320, 240);
        // Video frame rate: must come after the encoder/format calls, or an error is thrown
        mMediaRecorder.setVideoFrameRate(20);
        // mediaRecorder.setMaxDuration(Constant.MAXVEDIOTIME * 1000);
        mMediaRecorder.setOutputFile(mVecordFile.getAbsolutePath());
        mMediaRecorder.prepare();
    } catch (Exception e) {
        e.printStackTrace();
        releaseRecord();
    }
    try {
        mMediaRecorder.start();
    } catch (IllegalStateException e) {
        e.printStackTrace();
    } catch (RuntimeException e) {
        e.printStackTrace();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
1. Create: new MediaRecorder();
2. Set the Camera: mRecorder.setCamera();
3. Set the audio source (capture method): mRecorder.setAudioSource();
4. Set the video source (capture method): mRecorder.setVideoSource();
5. Set the output container format: mRecorder.setOutputFormat();
6. Set the audio encoding format (selects the encoder to create): mRecorder.setAudioEncoder();
7. Set the video encoding format (selects the encoder to create): mRecorder.setVideoEncoder();
8. Set the video encoding bit rate (bits per second): mRecorder.setVideoEncodingBitRate();
9. Set the video frame rate: mRecorder.setVideoFrameRate();
10. Set the captured video width and height: mRecorder.setVideoSize();
11. Set the maximum duration of the recording session (ms): mRecorder.setMaxDuration();
12. Set a Surface for preview display: mRecorder.setPreviewDisplay();
13. Set the output file path: mRecorder.setOutputFile();
14. Prepare to record: mRecorder.prepare();
15. Start recording: mRecorder.start();
16. Pause or resume recording: mRecorder.pause()/resume();
17. Stop recording: mRecorder.stop();
18. Reset the recorder: mRecorder.reset();
19. Release recorder resources: mRecorder.release();
2.1 new MediaRecorder()
- Create the instance
public MediaRecorder() {
    // Pick a Looper -- the current thread's if it has one, otherwise the main
    // thread's -- used to marshal JNI-layer callbacks back onto an app thread
    Looper looper;
    if ((looper = Looper.myLooper()) != null) {
        mEventHandler = new EventHandler(this, looper);
    } else if ((looper = Looper.getMainLooper()) != null) {
        mEventHandler = new EventHandler(this, looper);
    } else {
        mEventHandler = null;
    }
    mChannelCount = 1;
    String packageName = ActivityThread.currentPackageName();
    /* Native setup requires a weak reference to our object.
     * It's easier to create it here than in C++.
     */
    // Create the native MediaRecorder, passing a weak reference to this object
    native_setup(new WeakReference<MediaRecorder>(this), packageName,
            ActivityThread.currentOpPackageName());
}
----------------------
// Static initializer: load and link media_jni.so
static {
    System.loadLibrary("media_jni");
    native_init();
}
- Into the JNI layer, the native_init() implementation in android_media_MediaRecorder.cpp:
static void
android_media_MediaRecorder_native_init(JNIEnv *env)
{
    // JNIEnv is essentially a function table; JNI functions are reached via the -> operator
    // Class handle
    jclass clazz;
    // Look up the Java-layer MediaRecorder class from JNI
    clazz = env->FindClass("android/media/MediaRecorder");
    if (clazz == NULL) {
        return;
    }
    // Cache the Java field mNativeContext (a long): it holds, as an address value,
    // the pointer to the native MediaRecorder instance
    fields.context = env->GetFieldID(clazz, "mNativeContext", "J");
    if (fields.context == NULL) {
        return;
    }
    // Cache the Java field mSurface
    fields.surface = env->GetFieldID(clazz, "mSurface", "Landroid/view/Surface;");
    if (fields.surface == NULL) {
        return;
    }
    jclass surface = env->FindClass("android/view/Surface");
    if (surface == NULL) {
        return;
    }
    // Cache the static Java callback used to post native events up to the Java layer
    fields.post_event = env->GetStaticMethodID(clazz, "postEventFromNative",
            "(Ljava/lang/Object;IIILjava/lang/Object;)V");
    if (fields.post_event == NULL) {
        return;
    }
    clazz = env->FindClass("java/util/ArrayList");
    if (clazz == NULL) {
        return;
    }
    gArrayListFields.add = env->GetMethodID(clazz, "add", "(Ljava/lang/Object;)Z");
    gArrayListFields.classId = static_cast<jclass>(env->NewGlobalRef(clazz));
}
The static Java callback that the JNI layer invokes to deliver native events to the Java layer. It holds only a weak reference to the original MediaRecorder object, which keeps the native code safe if the Java object has been collected, and it uses the Handler mechanism to switch execution onto the handler's thread.
private static void postEventFromNative(Object mediarecorder_ref,
        int what, int arg1, int arg2, Object obj)
{
    MediaRecorder mr = (MediaRecorder)((WeakReference)mediarecorder_ref).get();
    if (mr == null) {
        return;
    }
    if (mr.mEventHandler != null) {
        Message m = mr.mEventHandler.obtainMessage(what, arg1, arg2, obj);
        mr.mEventHandler.sendMessage(m);
    }
}
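The weak-reference indirection in postEventFromNative can be illustrated with a small, self-contained Java sketch. The class names here are hypothetical, and the real code posts into an EventHandler instead of invoking the target directly:

```java
import java.lang.ref.WeakReference;

// Hypothetical sketch of the postEventFromNative pattern: the "native" side
// holds only a WeakReference, so events for a collected target are dropped.
class WeakCallbackDemo {
    static final class Recorder {
        String lastEvent;
        void onEvent(int what) { lastEvent = "event:" + what; }
    }

    // Stand-in for the static entry point the JNI layer invokes.
    static boolean postEventFromNative(Object recorderRef, int what) {
        Recorder r = (Recorder) ((WeakReference<?>) recorderRef).get();
        if (r == null) {
            return false; // target already collected: event silently dropped
        }
        r.onEvent(what); // the real code posts a Message to an EventHandler instead
        return true;
    }

    public static void main(String[] args) {
        Recorder r = new Recorder();
        postEventFromNative(new WeakReference<>(r), 1);
        System.out.println(r.lastEvent);
    }
}
```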
- Back in the constructor: the native_setup() implementation
static void
android_media_MediaRecorder_native_setup(JNIEnv *env, jobject thiz, jobject weak_this,
        jstring packageName, jstring opPackageName)
{
    ALOGV("setup");
    ScopedUtfChars opPackageNameStr(env, opPackageName);
    // Create the JNI-layer (native) MediaRecorder instance
    sp<MediaRecorder> mr = new MediaRecorder(String16(opPackageNameStr.c_str()));
    if (mr == NULL) {
        jniThrowException(env, "java/lang/RuntimeException", "Out of memory");
        return;
    }
    if (mr->initCheck() != NO_ERROR) {
        jniThrowException(env, "java/lang/RuntimeException", "Unable to initialize media recorder");
        return;
    }
    // create new listener and give it to MediaRecorder
    // With this listener installed, the Java-layer callbacks start working
    sp<JNIMediaRecorderListener> listener = new JNIMediaRecorderListener(env, thiz, weak_this);
    mr->setListener(listener);
    // Convert client name jstring to String16
    const char16_t *rawClientName = reinterpret_cast<const char16_t*>(
            env->GetStringChars(packageName, NULL));
    jsize rawClientNameLen = env->GetStringLength(packageName);
    String16 clientName(rawClientName, rawClientNameLen);
    env->ReleaseStringChars(packageName,
            reinterpret_cast<const jchar*>(rawClientName));
    // pass client package name for permissions tracking
    mr->setClientName(clientName);
    // Cache the native MediaRecorder's pointer value in the Java object for later retrieval
    setMediaRecorder(env, thiz, mr);
}
This installs the listeners and creates the corresponding C++-layer MediaRecorder object.
- Now the C++ MediaRecorder constructor:
MediaRecorder::MediaRecorder(const String16& opPackageName) : mSurfaceMediaSource(NULL)
{
    ALOGV("constructor");
    // As described earlier, obtain the BpMediaPlayerService proxy via Binder
    const sp<IMediaPlayerService> service(getMediaPlayerService());
    if (service != NULL) {
        // Create a BpMediaRecorder proxy; through this mMediaRecorder variable the
        // server-side MediaRecorderClient business logic is driven over Binder.
        mMediaRecorder = service->createMediaRecorder(opPackageName);
    }
    if (mMediaRecorder != NULL) {
        // Initialize the recording state to idle
        mCurrentState = MEDIA_RECORDER_IDLE;
    }
    // Clear out settings
    doCleanUp();
}
- The server-side implementation: a MediaRecorderClient (the Bn object) is created
sp<IMediaRecorder> MediaPlayerService::createMediaRecorder(const String16 &opPackageName)
{
    pid_t pid = IPCThreadState::self()->getCallingPid();
    sp<MediaRecorderClient> recorder = new MediaRecorderClient(this, pid, opPackageName);
    wp<MediaRecorderClient> w = recorder;
    Mutex::Autolock lock(mLock);
    mMediaRecorderClients.add(w);
    ALOGV("Create new media recorder client from pid %d", pid);
    return recorder;
}
- The MediaRecorderClient constructor:
MediaRecorderClient::MediaRecorderClient(const sp<MediaPlayerService>& service, pid_t pid,
        const String16& opPackageName)
{
    ALOGV("Client constructor");
    mPid = pid;
    mRecorder = new StagefrightRecorder(opPackageName);
    mMediaPlayerService = service;
}
- And the StagefrightRecorder constructor: plain initialization, which essentially confirms that most of the recording functionality is ultimately implemented and managed here
StagefrightRecorder::StagefrightRecorder(const String16 &opPackageName)
    : MediaRecorderBase(opPackageName),
      mWriter(NULL),
      mOutputFd(-1),
      mAudioSource((audio_source_t)AUDIO_SOURCE_CNT), // initialize with invalid value
      mVideoSource(VIDEO_SOURCE_LIST_END),
      mStarted(false),
      mSelectedDeviceId(AUDIO_PORT_HANDLE_NONE),
      mDeviceCallbackEnabled(false) {
    ALOGV("Constructor");
    mAnalyticsDirty = false;
    reset();
}
2.2 Set the Camera: setCamera()
MediaRecorder.java ==> android_media_MediaRecorder.cpp
==> MediaRecorder.cpp ==> MediaRecorderClient.cpp ==> StagefrightRecorder.cpp
- The Java layer calls straight into native:
public native void setCamera(Camera c);
- The JNI-layer entry point:
static void android_media_MediaRecorder_setCamera(JNIEnv* env, jobject thiz, jobject camera)
{
    // we should not pass a null camera to get_native_camera() call.
    if (camera == NULL) {
        jniThrowNullPointerException(env, "camera object is a NULL pointer");
        return;
    }
    // Resolve the native Camera instance from the Java-layer Camera object
    sp<Camera> c = get_native_camera(env, camera, NULL);
    if (c == NULL) {
        // get_native_camera will throw an exception in this case
        return;
    }
    // Fetch the cached JNI-layer MediaRecorder instance
    sp<MediaRecorder> mr = getMediaRecorder(env, thiz);
    if (mr == NULL) {
        jniThrowException(env, "java/lang/IllegalStateException", NULL);
        return;
    }
    // mr->setCamera() hands the native Camera and its recording proxy
    // to the JNI-layer MediaRecorder
    process_media_recorder_call(env, mr->setCamera(c->remote(), c->getRecordingProxy()),
            "java/lang/RuntimeException", "setCamera failed.");
}
- Next, MediaRecorder::setCamera():
status_t MediaRecorder::setCamera(const sp<hardware::ICamera>& camera,
        const sp<ICameraRecordingProxy>& proxy)
{
    ALOGV("setCamera(%p,%p)", camera.get(), proxy.get());
    if (mMediaRecorder == NULL) {
        ALOGE("media recorder is not initialized yet");
        return INVALID_OPERATION;
    }
    if (!(mCurrentState & MEDIA_RECORDER_IDLE)) {
        ALOGE("setCamera called in an invalid state(%d)", mCurrentState);
        return INVALID_OPERATION;
    }
    // Ultimately calls through the Bp proxy obtained in the MediaRecorder constructor
    status_t ret = mMediaRecorder->setCamera(camera, proxy);
    if (OK != ret) {
        ALOGV("setCamera failed: %d", ret);
        mCurrentState = MEDIA_RECORDER_ERROR;
        return ret;
    }
    return ret;
}
- The corresponding server-side implementation in MediaRecorderClient:
status_t MediaRecorderClient::setCamera(const sp<hardware::ICamera>& camera,
        const sp<ICameraRecordingProxy>& proxy)
{
    ALOGV("setCamera");
    Mutex::Autolock lock(mLock);
    if (mRecorder == NULL) {
        ALOGE("recorder is not initialized");
        return NO_INIT;
    }
    // mRecorder is the StagefrightRecorder instance identified earlier.
    return mRecorder->setCamera(camera, proxy);
}
- Which finally lands in the matching StagefrightRecorder method:
status_t StagefrightRecorder::setCamera(const sp<hardware::ICamera> &camera,
        const sp<ICameraRecordingProxy> &proxy) {
    ALOGV("setCamera");
    if (camera == 0) {
        ALOGE("camera is NULL");
        return BAD_VALUE;
    }
    if (proxy == 0) {
        ALOGE("camera proxy is NULL");
        return BAD_VALUE;
    }
    mCamera = camera;
    mCameraProxy = proxy;  // corresponds to Camera::RecordingProxy
    return OK;
}
2.3 Set the audio source (capture method): setAudioSource()
Calling this interface requires the android.permission.RECORD_AUDIO permission.
Following the same path as above, the call ends up in the Stagefright framework's StagefrightRecorder; from here on only the final callee is shown.
It stores the audio-source enum value; the actual audio source is created later.
status_t StagefrightRecorder::setAudioSource(audio_source_t as) {
    ALOGV("setAudioSource: %d", as);
    if (as < AUDIO_SOURCE_DEFAULT ||
        (as >= AUDIO_SOURCE_CNT && as != AUDIO_SOURCE_FM_TUNER)) {
        ALOGE("Invalid audio source: %d", as);
        return BAD_VALUE;
    }
    if (as == AUDIO_SOURCE_DEFAULT) {
        mAudioSource = AUDIO_SOURCE_MIC;
    } else {
        mAudioSource = as;
    }
    return OK;
}
2.4 Set the video source (capture method): setVideoSource()
Calling this interface requires the android.permission.CAMERA permission.
Analogous to the audio source: the chosen video-source enum value is cached.
status_t StagefrightRecorder::setVideoSource(video_source vs) {
    ALOGV("setVideoSource: %d", vs);
    if (vs < VIDEO_SOURCE_DEFAULT ||
        vs >= VIDEO_SOURCE_LIST_END) {
        ALOGE("Invalid video source: %d", vs);
        return BAD_VALUE;
    }
    if (vs == VIDEO_SOURCE_DEFAULT) {
        mVideoSource = VIDEO_SOURCE_CAMERA;
    } else {
        mVideoSource = vs;
    }
    return OK;
}
2.5 Set the output container format: setOutputFormat()
The call ends in this method, which caches the requested output-format enum for the file that encoding will eventually produce.
status_t StagefrightRecorder::setOutputFormat(output_format of) {
    ALOGV("setOutputFormat: %d", of);
    if (of < OUTPUT_FORMAT_DEFAULT ||
        of >= OUTPUT_FORMAT_LIST_END) {
        ALOGE("Invalid output format: %d", of);
        return BAD_VALUE;
    }
    if (of == OUTPUT_FORMAT_DEFAULT) {
        mOutputFormat = OUTPUT_FORMAT_THREE_GPP;
    } else {
        mOutputFormat = of;
    }
    return OK;
}
2.6 Set the audio encoding format: setAudioEncoder()
The chosen audio-encoder enum value (audio_encoder) is saved; the actual encoder is created when encoding starts.
status_t StagefrightRecorder::setAudioEncoder(audio_encoder ae) {
    ALOGV("setAudioEncoder: %d", ae);
    if (ae < AUDIO_ENCODER_DEFAULT ||
        ae >= AUDIO_ENCODER_LIST_END) {
        ALOGE("Invalid audio encoder: %d", ae);
        return BAD_VALUE;
    }
    if (ae == AUDIO_ENCODER_DEFAULT) {
        mAudioEncoder = AUDIO_ENCODER_AMR_NB;
    } else {
        mAudioEncoder = ae;
    }
    return OK;
}
2.7 Set the video encoding format: setVideoEncoder()
As with audio, this calls straight through to the StagefrightRecorder method, caching the video-encoder enum value (video_encoder) for encoder creation later.
status_t StagefrightRecorder::setVideoEncoder(video_encoder ve) {
    ALOGV("setVideoEncoder: %d", ve);
    if (ve < VIDEO_ENCODER_DEFAULT ||
        ve >= VIDEO_ENCODER_LIST_END) {
        ALOGE("Invalid video encoder: %d", ve);
        return BAD_VALUE;
    }
    mVideoEncoder = ve;
    return OK;
}
2.8 Set the video encoding bit rate: setVideoEncodingBitRate()
The rate is in bits per second.
- Java layer
public void setVideoEncodingBitRate(int bitRate) {
    if (bitRate <= 0) {
        throw new IllegalArgumentException("Video encoding bit rate is not positive");
    }
    setParameter("video-param-encoding-bitrate=" + bitRate);
}
-------------------
// Delegates to a native method
private native void setParameter(String nameValuePair);
- Native layer
setParameter() handles each key differently.
// Splits "key=value" pairs out of the parameter string at '='
status_t StagefrightRecorder::setParameters(const String8 &params)
--------------------------
// Set the matching parameter for each key
status_t StagefrightRecorder::setParameter(
        const String8 &key, const String8 &value) {
    ALOGV("setParameter: key (%s) => value (%s)", key.string(), value.string());
    // ... handling of other keys omitted ...
    } else if (key == "video-param-encoding-bitrate") {
        int32_t video_bitrate;
        if (safe_strtoi32(value.string(), &video_bitrate)) {
            return setParamVideoEncodingBitRate(video_bitrate);
        }
        // ... handling of other keys omitted ...
    } else {
        ALOGE("setParameter: failed to find key %s", key.string());
    }
    return BAD_VALUE;
}
-------------------------
// Finally the value is validated and cached here
status_t StagefrightRecorder::setParamVideoEncodingBitRate(int32_t bitRate) {
    ALOGV("setParamVideoEncodingBitRate: %d", bitRate);
    if (bitRate <= 0) {
        ALOGE("Invalid video encoding bit rate: %d", bitRate);
        return BAD_VALUE;
    }
    // The target bit rate may not be exactly the same as the requested.
    // It depends on many factors, such as rate control, and the bit rate
    // range that a specific encoder supports. The mismatch between the
    // the target and requested bit rate will NOT be treated as an error.
    mVideoBitRate = bitRate;
    return OK;
}
2.9 Set the desired video frame rate: setVideoFrameRate()
The system may adjust the actual rate.
Final callee: the value is cached.
status_t StagefrightRecorder::setVideoFrameRate(int frames_per_second) {
    ALOGV("setVideoFrameRate: %d", frames_per_second);
    if ((frames_per_second <= 0 && frames_per_second != -1) ||
        frames_per_second > kMaxHighSpeedFps) {
        ALOGE("Invalid video frame rate: %d", frames_per_second);
        return BAD_VALUE;
    }
    // Additional check on the frame rate will be performed later
    mFrameRate = frames_per_second;
    return OK;
}
2.10 Set the video width and height: setVideoSize()
Final callee: cached in preparation for later steps.
status_t StagefrightRecorder::setVideoSize(int width, int height) {
    ALOGV("setVideoSize: %dx%d", width, height);
    if (width <= 0 || height <= 0) {
        ALOGE("Invalid video size: %dx%d", width, height);
        return BAD_VALUE;
    }
    // Additional check on the dimension will be performed later
    mVideoWidth = width;
    mVideoHeight = height;
    return OK;
}
2.11 Set the maximum duration of a recording session: setMaxDuration()
Unit: milliseconds (ms)
static void
android_media_MediaRecorder_setMaxDuration(JNIEnv *env, jobject thiz, jint max_duration_ms)
{
    ALOGV("setMaxDuration(%d)", max_duration_ms);
    sp<MediaRecorder> mr = getMediaRecorder(env, thiz);
    if (mr == NULL) {
        jniThrowException(env, "java/lang/IllegalStateException", NULL);
        return;
    }
    char params[64];
    // Build the corresponding key=value string
    sprintf(params, "max-duration=%d", max_duration_ms);
    // Then call mr->setParameters
    process_media_recorder_call(env, mr->setParameters(String8(params)), "java/lang/RuntimeException", "setMaxDuration failed.");
}
Final callee: the value is cached.
status_t StagefrightRecorder::setParamMaxFileDurationUs(int64_t timeUs) {
    ALOGV("setParamMaxFileDurationUs: %lld us", (long long)timeUs);
    // This is meant for backward compatibility for MediaRecorder.java
    if (timeUs <= 0) {
        ALOGW("Max file duration is not positive: %lld us. Disabling duration limit.",
                (long long)timeUs);
        timeUs = 0; // Disable the duration limit for zero or negative values.
    } else if (timeUs <= 100000LL) {  // XXX: 100 milli-seconds
        ALOGE("Max file duration is too short: %lld us", (long long)timeUs);
        return BAD_VALUE;
    }
    if (timeUs <= 15 * 1000000LL) {
        ALOGW("Target duration (%lld us) too short to be respected", (long long)timeUs);
    }
    mMaxFileDurationUs = timeUs;
    return OK;
}
2.12 Set the Surface for preview display: setPreviewDisplay()
As noted when the MediaRecorder object was created, this value is simply cached on the native side.
public void setPreviewDisplay(Surface sv) {
    mSurface = sv;
}
----------------------
// Cached as follows:
struct fields_t {
    jfieldID context;
    // Caches the Java-layer mSurface field ID, used later in prepare()
    jfieldID surface;
    jmethodID post_event;
};
2.13 Set the output file path: setOutputFile()
Call it after setOutputFormat() and before prepare().
The final callee receives a file descriptor, the id Linux uses to operate on the file. The Java method has three parameter variants (mPath, mFd, mFile); the value ultimately reaches the lower layer via prepare().
- setOutputFile() ==> prepare() ==> _setOutputFile()
status_t StagefrightRecorder::setOutputFile(int fd) {
    ALOGV("setOutputFile: %d", fd);
    if (fd < 0) {
        ALOGE("Invalid file descriptor: %d", fd);
        return -EBADF;
    }
    // start with a clean, empty file
    ftruncate(fd, 0);
    if (mOutputFd >= 0) {
        ::close(mOutputFd);
    }
    mOutputFd = dup(fd);
    return OK;
}
2.14 Prepare to record: prepare()
- JNI layer
static void
android_media_MediaRecorder_prepare(JNIEnv *env, jobject thiz)
{
    ALOGV("prepare");
    // Fetch the previously cached C++-layer MediaRecorder instance
    sp<MediaRecorder> mr = getMediaRecorder(env, thiz);
    if (mr == NULL) {
        jniThrowException(env, "java/lang/IllegalStateException", NULL);
        return;
    }
    // Fetch the previously cached Java-layer Surface instance
    jobject surface = env->GetObjectField(thiz, fields.surface);
    if (surface != NULL) {
        // Resolve the native-layer Surface object
        const sp<Surface> native_surface = get_surface(env, surface);
        // The application may misbehave and
        // the preview surface becomes unavailable
        if (native_surface.get() == 0) {
            ALOGE("Application lost the surface");
            jniThrowException(env, "java/io/IOException", "invalid preview surface");
            return;
        }
        ALOGI("prepare: surface=%p", native_surface.get());
        // Important; analyzed below
        if (process_media_recorder_call(env, mr->setPreviewSurface(native_surface->getIGraphicBufferProducer()), "java/lang/RuntimeException", "setPreviewSurface failed.")) {
            return;
        }
    }
    // Video recording normally requires a Surface for preview; audio-only
    // recording skips the block above and reaches only this call
    process_media_recorder_call(env, mr->prepare(), "java/io/IOException", "prepare failed.");
}
- About the Surface (recording needs a preview window):
mRecorder.setPreviewDisplay(mSurfaceHolder.getSurface());
mr->setPreviewSurface(native_surface->getIGraphicBufferProducer());
IGraphicBufferProducer is the crucial bridge between the app and the BufferQueue (the data-source buffer queue). GraphicBufferProducer serves the display needs of a single app process: it dequeues buffers from the BufferQueue, fills them with UI content, and then notifies SurfaceFlinger to display them.
Final callee:
status_t StagefrightRecorder::setPreviewSurface(const sp<IGraphicBufferProducer> &surface) {
    ALOGV("setPreviewSurface: %p", surface.get());
    mPreviewSurface = surface;
    return OK;
}
The instance is cached for displaying images later.
However, the real preparation (prepareInternal) is invoked later, inside StagefrightRecorder::start(); so that final method is analyzed here:
status_t StagefrightRecorder::prepareInternal() {
    ALOGV("prepare");
    if (mOutputFd < 0) {
        ALOGE("Output file descriptor is invalid");
        return INVALID_OPERATION;
    }
    // Get UID and PID here for permission checking
    mClientUid = IPCThreadState::self()->getCallingUid();
    mClientPid = IPCThreadState::self()->getCallingPid();
    status_t status = OK;
    // Initialize the recording controller matching the output format requested via
    // setOutputFormat(); MPEG4 is analyzed here, the others follow the same pattern.
    switch (mOutputFormat) {
        case OUTPUT_FORMAT_DEFAULT:
        case OUTPUT_FORMAT_THREE_GPP:
        case OUTPUT_FORMAT_MPEG_4:
        case OUTPUT_FORMAT_WEBM:
            // Set up recording in MPEG4 or WebM format
            status = setupMPEG4orWEBMRecording();
            break;
        case OUTPUT_FORMAT_AMR_NB:
        case OUTPUT_FORMAT_AMR_WB:
            // Set up recording in AMR format
            status = setupAMRRecording();
            break;
        case OUTPUT_FORMAT_AAC_ADIF:
        case OUTPUT_FORMAT_AAC_ADTS:
            // Set up recording in AAC format
            status = setupAACRecording();
            break;
        case OUTPUT_FORMAT_RTP_AVP:
            status = setupRTPRecording();
            break;
        case OUTPUT_FORMAT_MPEG2TS:
            status = setupMPEG2TSRecording();
            break;
        default:
            ALOGE("Unsupported output file format: %d", mOutputFormat);
            status = UNKNOWN_ERROR;
            break;
    }
    ALOGV("Recording frameRate: %d captureFps: %f",
            mFrameRate, mCaptureFps);
    return status;
}
status_t StagefrightRecorder::setupMPEG4orWEBMRecording() {
    // mWriter is a sp<MediaWriter>; here it ends up as an MPEG4Writer instance
    mWriter.clear();
    mTotalBitRate = 0;
    status_t err = OK;
    sp<MediaWriter> writer;
    sp<MPEG4Writer> mp4writer;
    if (mOutputFormat == OUTPUT_FORMAT_WEBM) {
        writer = new WebmWriter(mOutputFd);
    } else {
        // Only the MPEG4 file format is analyzed, so this branch runs
        writer = mp4writer = new MPEG4Writer(mOutputFd);
    }
    if (mVideoSource < VIDEO_SOURCE_LIST_END) {
        // Picks the default video encoder if necessary, checking whether certain
        // configuration parameters are still at their defaults; here it amounts
        // to H.264 (VIDEO_ENCODER_H264)
        setDefaultVideoEncoderIfNecessary();
        sp<MediaSource> mediaSource;
        /* Based on the enum set via setVideoSource(), this creates the object that
           reads the data captured by the Camera. For audio-only recording it is
           neither created nor needed. The object returned is actually a CameraSource:
               class CameraSource : public MediaSource, public MediaBufferObserver
           so the video data captured from the Camera can be processed through it.
           A flag, mCaptureFpsEnable, decides whether to instead create
           CameraSourceTimeLapse (a CameraSource subclass) for time-lapse recording;
           by default it is not used. */
        err = setupMediaSource(&mediaSource);
        if (err != OK) {
            return err;
        }
        /* This initializes a video encoder, passing in the CameraSource-typed media
           source. Inside, sp<MetaData> meta = cameraSource->getFormat(); fetches the
           width/height and other format information of the captured video, which is
           then adjusted by further variables into the final output format. Then:
               sp<MediaCodecSource> encoder = MediaCodecSource::Create(
                       mLooper, format, cameraSource, mPersistentSurface, flags);
           actually creates the final encoder from that format and configuration:
               struct MediaCodecSource : public MediaSource, public MediaBufferObserver;
           Note that it implements the same interfaces as CameraSource, so
           MediaCodecSource can exchange messages with CameraSource and pull data from
           it -- the video source and the video encoder are thereby bound together, and
           MediaCodec can keep pulling frames from CameraSource for processing.
           There is also an ALooper/AHandler/AMessage machinery here, essentially the
           native analogue of the Java-layer Handler message mechanism: events and data
           are handled asynchronously through callbacks.
           Concretely: inside MediaCodecSource, a Puller (an AHandler subclass) owns the
           CameraSource instance, so it can drive CameraSource and read its data, e.g.:
               status_t err = mSource->start(static_cast<MetaData *>(obj.get())); // start recording
               mSource->stop();                      // stop recording
               status_t err = mSource->read(&mbuf);  // fetch buffered source data to encode
        */
        sp<MediaCodecSource> encoder;
        err = setupVideoEncoder(mediaSource, &encoder);
        if (err != OK) {
            return err;
        }
        /* The encoder is then added to the MPEG4Writer:
               Track *track = new Track(this, source, 1 + mTracks.size());
           where source is the encoder instance. Writer and encoder are wrapped in a
           Track, which thus gains the ability to drive writer, encoder, and
           CameraSource; the encoded data is ultimately written out by the track. The
           track is stored in List<Track *> mTracks; each audio and video source gets
           its own Track instance. */
        writer->addSource(encoder);
        // Cache the video encoder in the member variable
        mVideoEncoderSource = encoder;
        // Total bit rate
        mTotalBitRate += mVideoBitRate;
    }
    if (mOutputFormat != OUTPUT_FORMAT_WEBM) {
        // Audio source is added at the end if it exists.
        // This help make sure that the "recoding" sound is suppressed for
        // camcorder applications in the recorded files.
        // TODO Audio source is currently unsupported for webm output; vorbis encoder needed.
        // disable audio for time lapse recording
        bool disableAudio = mCaptureFpsEnable && mCaptureFps < mFrameRate;
        if (!disableAudio && mAudioSource != AUDIO_SOURCE_CNT) {
            // Creates an audio encoder, much like the video one above; it too is added
            // to the writer, wrapped in its own new Track for the audio data.
            err = setupAudioEncoder(writer);
            if (err != OK) return err;
            mTotalBitRate += mAudioBitRate;
        }
        if (mCaptureFpsEnable) {
            mp4writer->setCaptureRate(mCaptureFps);
        }
        if (mInterleaveDurationUs > 0) {
            mp4writer->setInterleaveDuration(mInterleaveDurationUs);
        }
        if (mLongitudex10000 > -3600000 && mLatitudex10000 > -3600000) {
            mp4writer->setGeoData(mLatitudex10000, mLongitudex10000);
        }
    }
    if (mMaxFileDurationUs != 0) {
        writer->setMaxFileDuration(mMaxFileDurationUs);
    }
    if (mMaxFileSizeBytes != 0) {
        writer->setMaxFileSize(mMaxFileSizeBytes);
    }
    if (mVideoSource == VIDEO_SOURCE_DEFAULT
            || mVideoSource == VIDEO_SOURCE_CAMERA) {
        mStartTimeOffsetMs = mEncoderProfiles->getStartTimeOffsetMs(mCameraId);
    } else if (mVideoSource == VIDEO_SOURCE_SURFACE) {
        // surface source doesn't need large initial delay
        // Use a 100 ms start-time offset
        mStartTimeOffsetMs = 100;
    }
    if (mStartTimeOffsetMs > 0) {
        // Pass it into the writer
        writer->setStartTimeOffsetMs(mStartTimeOffsetMs);
    }
    // Install the listener object through which the C++ layer can propagate
    // native events all the way up to the Java layer
    writer->setListener(mListener);
    mWriter = writer;
    return OK;
}
2.15 Start recording: start()
Starting is the most involved step; the analysis follows:
status_t StagefrightRecorder::start() {
    ALOGV("start");
    Mutex::Autolock autolock(mLock);
    if (mOutputFd < 0) {
        ALOGE("Output file descriptor is invalid");
        return INVALID_OPERATION;
    }
    status_t status = OK;
    if (mVideoSource != VIDEO_SOURCE_SURFACE) {
        // For this kind of video source the preparation step was not actually run
        // earlier, so it must run now; prepareInternal() was analyzed above
        status = prepareInternal();
        if (status != OK) {
            return status;
        }
    }
    if (mWriter == NULL) {
        ALOGE("File writer is not avaialble");
        return UNKNOWN_ERROR;
    }
    switch (mOutputFormat) {
        case OUTPUT_FORMAT_DEFAULT:
        case OUTPUT_FORMAT_THREE_GPP:
        case OUTPUT_FORMAT_MPEG_4:
        case OUTPUT_FORMAT_WEBM:
        {
            bool isMPEG4 = true;
            if (mOutputFormat == OUTPUT_FORMAT_WEBM) {
                isMPEG4 = false;
            }
            // Create a MetaData object holding the MPEG4 metadata -- start time,
            // file type, total bit rate, and so on -- through which the specific
            // format information of the stream can be queried.
            sp<MetaData> meta = new MetaData;
            setupMPEG4orWEBMMetaData(&meta);
            /* Then MPEG4Writer::start() is called with this meta object to start the
               actual audio/video recording. Internally, start() roughly does this:
               it packages the video format data into box structures, then calls the
               crucial startWriterThread() to spin up a thread that keeps pulling the
               audio/video source data returned from the driver layer via
               CameraSource::read(), processed per track first. It also calls
               startTracks(param), which iterates over all tracks (the per-stream
               source-tracking objects) and starts each one immediately:
                   for (List<Track *>::iterator it = mTracks.begin();
                           it != mTracks.end(); ++it) {
                       status_t err = (*it)->start(params);
               Each Track instance in turn runs
                   status_t err = mSource->start(meta.get());
               where this source is the encoder analyzed earlier, which finally calls
               CameraSource::start() to truly begin recording. Asynchronous events and
               data then flow through the C++-layer Handler message mechanism. */
            status = mWriter->start(meta.get());
            break;
        }
        // ... remaining cases omitted ...
    return status;
}
Summary:
MPEG4Writer reads the data that the final encoder (MediaCodecSource, backed by an OMX codec) has finished encoding, while the encoder processes the raw audio/video data that CameraSource passes up from the driver layer.
The core classes are MPEG4Writer and the Track objects. Each Track ultimately hands its encoded data to MPEG4Writer's write path, which writes it into the output file via the write() call (::write(fd, result.string(), result.size())). The metadata of the two tracks, audio and video, is therefore recorded and processed per Track:
for (List<Track *>::iterator it = mTracks.begin();
        it != mTracks.end(); ++it) {
    (*it)->dump(fd, args);
}
i.e. the tracks are simply iterated and their data merged into the file.
Call chain:
StagefrightRecorder::dump() -> MPEG4Writer::dump() -> iterate Track::dump(), which finally writes the data to the file.
2.16 Pause/resume recording: pause()/resume()
status_t StagefrightRecorder::pause() {
    ALOGV("pause");
    if (!mStarted) {
        return INVALID_OPERATION;
    }
    // Already paused --- no-op.
    if (mPauseStartTimeUs != 0) {
        return OK;
    }
    mPauseStartTimeUs = systemTime() / 1000;
    sp<MetaData> meta = new MetaData;
    meta->setInt64(kKeyTime, mPauseStartTimeUs);
    if (mStartedRecordingUs != 0) {
        // should always be true
        int64_t recordingUs = mPauseStartTimeUs - mStartedRecordingUs;
        mDurationRecordedUs += recordingUs;
        mStartedRecordingUs = 0;
    }
    if (mAudioEncoderSource != NULL) {
        // Ultimately calls the audio encoder's pause
        mAudioEncoderSource->pause();
    }
    if (mVideoEncoderSource != NULL) {
        // Ultimately calls the video encoder's pause
        mVideoEncoderSource->pause(meta.get());
    }
    return OK;
}
2.17 Stop recording: stop()
status_t StagefrightRecorder::stop() {
    ALOGV("stop");
    Mutex::Autolock autolock(mLock);
    status_t err = OK;
    // ... code omitted ...
    if (mWriter != NULL) {
        // Ultimately calls MPEG4Writer::stop(), which calls reset(): it first stops
        // every Track, then stopWriterThread() stops the reading thread, which stops
        // the recording, and release() frees the resources
        err = mWriter->stop();
        mWriter.clear();
    }
    // ... code omitted ...
    return err;
}
2.18 Reset the recorder: reset()
Much like stop() above: the writer thread is stopped and release() frees the resources.
2.19 Release recorder resources: release()
Closes the file stream, frees cached buffers, and so on.
status_t MediaRecorderClient::release()
{
    ALOGV("release");
    Mutex::Autolock lock(mLock);
    if (mRecorder != NULL) {
        delete mRecorder;
        mRecorder = NULL;
        wp<MediaRecorderClient> client(this);
        mMediaPlayerService->removeMediaRecorderClient(client);
    }
    clearDeathNotifiers_l();
    return NO_ERROR;
}
---------------------------
void MPEG4Writer::release() {
    close(mFd);
    mFd = -1;
    mInitCheck = NO_INIT;
    mStarted = false;
    free(mInMemoryCache);
    mInMemoryCache = NULL;
}
3. AMRWriter
Each container format the recorded data can be stored in has a corresponding writer; for amr files it is AMRWriter.
3.1 prepareInternal()
With mMediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.AMR_NB) set, the following branch runs and instantiates an AMRWriter.
case OUTPUT_FORMAT_AMR_NB:
case OUTPUT_FORMAT_AMR_WB:
    status = setupAMRRecording();
    break;
--------------
status_t StagefrightRecorder::setupAMRRecording() {
    CHECK(mOutputFormat == OUTPUT_FORMAT_AMR_NB ||
          mOutputFormat == OUTPUT_FORMAT_AMR_WB);
    if (mOutputFormat == OUTPUT_FORMAT_AMR_NB) {
        if (mAudioEncoder != AUDIO_ENCODER_DEFAULT &&
            mAudioEncoder != AUDIO_ENCODER_AMR_NB) {
            ALOGE("Invalid encoder %d used for AMRNB recording",
                    mAudioEncoder);
            return BAD_VALUE;
        }
    } else {  // mOutputFormat must be OUTPUT_FORMAT_AMR_WB
        if (mAudioEncoder != AUDIO_ENCODER_AMR_WB) {
            ALOGE("Invlaid encoder %d used for AMRWB recording",
                    mAudioEncoder);
            return BAD_VALUE;
        }
    }
    // Create the AMRWriter
    mWriter = new AMRWriter(mOutputFd);
    return setupRawAudioRecording();
}
3.1.1 setupRawAudioRecording()
status_t StagefrightRecorder::setupRawAudioRecording() {
    // ... code omitted ...
    sp<MediaCodecSource> audioEncoder = createAudioSource();
    if (audioEncoder == NULL) {
        return UNKNOWN_ERROR;
    }
    CHECK(mWriter != 0);
    mWriter->addSource(audioEncoder);
    mAudioEncoderSource = audioEncoder;
    // Apply the duration and size limits
    if (mMaxFileDurationUs != 0) {
        mWriter->setMaxFileDuration(mMaxFileDurationUs);
    }
    if (mMaxFileSizeBytes != 0) {
        mWriter->setMaxFileSize(mMaxFileSizeBytes);
    }
    mWriter->setListener(mListener);  // install the listener
    return OK;
}
3.1.2 createAudioSource()
Mainly sets the format parameters and creates the audioSource data source, then creates the encoder from them.
sp<MediaCodecSource> StagefrightRecorder::createAudioSource() {
    int32_t sourceSampleRate = mSampleRate;
    // ... code omitted ...
    // Create the AudioSource (the audio data source), which in turn creates an AudioRecord
    sp<AudioSource> audioSource =
        new AudioSource(
                mAudioSource,
                mOpPackageName,
                sourceSampleRate,
                mAudioChannels,
                mSampleRate,
                mClientUid,
                mClientPid,
                mSelectedDeviceId);
    // Initialization check
    status_t err = audioSource->initCheck();
    if (err != OK) {
        ALOGE("audio source is not initialized");
        return NULL;
    }
    sp<AMessage> format = new AMessage;
    // Map the encoding format chosen via setAudioEncoder() to the matching metadata
    switch (mAudioEncoder) {
        case AUDIO_ENCODER_AMR_NB:
        case AUDIO_ENCODER_DEFAULT:
            format->setString("mime", MEDIA_MIMETYPE_AUDIO_AMR_NB);
            break;
        case AUDIO_ENCODER_AMR_WB:
            format->setString("mime", MEDIA_MIMETYPE_AUDIO_AMR_WB);
            break;
        case AUDIO_ENCODER_AAC:
            format->setString("mime", MEDIA_MIMETYPE_AUDIO_AAC);
            format->setInt32("aac-profile", OMX_AUDIO_AACObjectLC);
            break;
        case AUDIO_ENCODER_HE_AAC:
            format->setString("mime", MEDIA_MIMETYPE_AUDIO_AAC);
            format->setInt32("aac-profile", OMX_AUDIO_AACObjectHE);
            break;
        case AUDIO_ENCODER_AAC_ELD:
            format->setString("mime", MEDIA_MIMETYPE_AUDIO_AAC);
            format->setInt32("aac-profile", OMX_AUDIO_AACObjectELD);
            break;
        default:
            ALOGE("Unknown audio encoder: %d", mAudioEncoder);
            return NULL;
    }
    // ... code omitted ...
    // Set the parameters
    format->setInt32("max-input-size", maxInputSize);
    format->setInt32("channel-count", mAudioChannels);
    format->setInt32("sample-rate", mSampleRate);
    format->setInt32("bitrate", mAudioBitRate);
    if (mAudioTimeScale > 0) {
        format->setInt32("time-scale", mAudioTimeScale);
    }
    format->setInt32("priority", 0 /* realtime */);
    // Create the encoder
    sp<MediaCodecSource> audioEncoder =
        MediaCodecSource::Create(mLooper, format, audioSource);
    sp<AudioSystem::AudioDeviceCallback> callback = mAudioDeviceCallback.promote();
    if (mDeviceCallbackEnabled && callback != 0) {
        audioSource->addAudioDeviceCallback(callback);
    }
    mAudioSourceNode = audioSource;
    if (audioEncoder == NULL) {
        ALOGE("Failed to create audio encoder");
    }
    return audioEncoder;
}
3.1.3 new AudioSource()
AudioSource链接MediaCodecSource和StagefrightRecorder
AudioSource::AudioSource(
audio_source_t inputSource, const String16 &opPackageName,
uint32_t sampleRate, uint32_t channelCount, uint32_t outSampleRate,
uid_t uid, pid_t pid, audio_port_handle_t selectedDeviceId)
: mStarted(false),
mSampleRate(sampleRate),
mOutSampleRate(outSampleRate > 0 ? outSampleRate : sampleRate),
mTrackMaxAmplitude(false),
mStartTimeUs(0),
mStopSystemTimeUs(-1),
mLastFrameTimestampUs(0),
mMaxAmplitude(0),
mPrevSampleTimeUs(0),
mInitialReadTimeUs(0),
mNumFramesReceived(0),
mNumFramesSkipped(0),
mNumFramesLost(0),
mNumClientOwnedBuffers(0),
mNoMoreFramesToRead(false) {
// Some code omitted
// Get minFrameCount, which is used when creating the AudioRecord
status_t status = AudioRecord::getMinFrameCount(&minFrameCount,
sampleRate,
AUDIO_FORMAT_PCM_16_BIT,
audio_channel_in_mask_from_count(channelCount));
// Some code omitted
if (status == OK) {
mRecord = new AudioRecord(
inputSource, sampleRate, AUDIO_FORMAT_PCM_16_BIT,
audio_channel_in_mask_from_count(channelCount),
opPackageName,
(size_t) (bufCount * frameCount),
AudioRecordCallbackFunction,
this,
frameCount /*notificationFrames*/,
AUDIO_SESSION_ALLOCATE,
AudioRecord::TRANSFER_DEFAULT,
AUDIO_INPUT_FLAG_NONE,
uid,
pid,
NULL /*pAttributes*/,
selectedDeviceId);
mInitCheck = mRecord->initCheck();
if (mInitCheck != OK) {
mRecord.clear();
}
} else {
mInitCheck = status;
}
}
3.1.4 new AudioRecord()
new AudioRecord() does its real work in the set() method. See the flow analysis below:
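The constructor-delegates-to-set() idiom used here, with initCheck() reporting the result afterwards, can be sketched as follows (the `Recorder` class and its members are hypothetical, not the real AudioRecord API):

```cpp
// Two-phase init idiom: the constructor only forwards its arguments to
// set(), which does the real work; callers then query initCheck()
// before using the object, exactly as AudioSource does with mRecord.
class Recorder {
public:
    Recorder(int sampleRate, int channels) { mStatus = set(sampleRate, channels); }
    int initCheck() const { return mStatus; }  // 0 == OK
private:
    int set(int sampleRate, int channels) {
        if (sampleRate <= 0 || channels <= 0) return -22;  // -EINVAL
        // ... allocate buffers, open the input stream, etc. ...
        return 0;
    }
    int mStatus = -1;
};
```

The payoff is that a failed construction never throws; the object simply reports a non-OK initCheck(), which is why the code above clears mRecord when mInitCheck != OK.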
3.2 start()
StagefrightRecorder::start() was covered in section 2.15; it then calls mWriter->start() directly:
status_t AMRWriter::start(MetaData * /* params */) {
// Some code omitted: early return when not in a startable state
status_t err = mSource->start(); // audioEncoder -> MediaCodecSource
if (err != OK) {
return err;
}
pthread_attr_t attr;
pthread_attr_init(&attr);
pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE);
mReachedEOS = false;
mDone = false;
pthread_create(&mThread, &attr, ThreadWrapper, this);
pthread_attr_destroy(&attr);
mStarted = true;
return OK;
}
3.2.1 start() flow
mSource->start() ==> MediaCodecSource::start() ==> kWhatStart ==> onStart() ==> MediaCodecSource::Puller::start() ==> AudioSource::start() ==>
AudioRecord::start() ==> mAudioRecord->start() ==> RecordHandle::start() ==> RecordThread::RecordTrack::start() (Threads.cpp) ==> AudioSystem::startInput() ==>
AudioPolicyService::startInput() ==> AudioPolicyManager::startInput()
status_t AudioPolicyManager::startInput(audio_io_handle_t input,
audio_session_t session,
bool silenced,
concurrency_type__mask_t *concurrency)
{
// Get the input descriptor
sp<AudioInputDescriptor> inputDesc = mInputs.valueAt(index);
// Make sure we start with the correct silence state
audioSession->setSilenced(silenced);
// Increment the activity count before calling getNewInputDevice() below, since only active sessions are considered for device selection
audioSession->changeActiveCount(1);
// Routing?
mInputRoutes.incRouteActivity(session);
if (audioSession->activeCount() == 1 || mInputRoutes.getAndClearRouteChanged(session)) {
// If capture starts from a mic on the primary HW module, indicate active capture to the sound trigger service
audio_devices_t device = getNewInputDevice(inputDesc);
setInputDevice(input, device, true /* force */);
// Increment the count
status_t status = inputDesc->start();
if (status != NO_ERROR) {
mInputRoutes.decRouteActivity(session);
audioSession->changeActiveCount(-1);
return status;
}
//....
}
return NO_ERROR;
}
3.2.2 pthread_create
The encoded data is read on the thread created here.
The pthread_create function:
- Overview: pthread_create is the UNIX function for creating a thread.
- Header: #include <pthread.h>
- Declaration:
int pthread_create(pthread_t *restrict tidp, const pthread_attr_t *restrict attr, void *(*start_rtn)(void *), void *restrict arg);
- Parameters:
(1) tidp: a pthread_t provided by the caller. On success, the memory it points to is set to the ID of the newly created thread.
(2) attr: used to customize thread attributes; pass NULL for the defaults.
(3) start_rtn: the function the new thread starts executing.
(4) arg: the argument passed to start_rtn; NULL if none, otherwise the address of the argument. To pass more than one value, pass a struct by address.
- Return value: 0 on success, otherwise an error number.
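A minimal, runnable use of pthread_create following the same joinable-thread pattern as AMRWriter::start() above, with a ThreadWrapper-style entry point that passes the object through the void* argument (the `Counter` type and function names are illustrative):

```cpp
#include <pthread.h>

struct Counter { int value = 0; };

// Static entry point: pthread hands back the opaque pointer we passed
// as the last argument of pthread_create, just as AMRWriter passes `this`.
static void* threadEntry(void* arg) {
    static_cast<Counter*>(arg)->value = 42;  // work done on the new thread
    return nullptr;
}

int runJoinableThread(Counter* c) {
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE);
    pthread_t tid;
    int err = pthread_create(&tid, &attr, threadEntry, c);
    pthread_attr_destroy(&attr);
    if (err != 0) return err;
    pthread_join(tid, nullptr);  // joinable: the caller must reap the thread
    return 0;
}
```

AMRWriter keeps the thread joinable so that stop() can pthread_join it and collect the final status from threadFunc().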
status_t AMRWriter::threadFunc() {
mEstimatedDurationUs = 0;
mEstimatedSizeBytes = 0;
bool stoppedPrematurely = true;
int64_t previousPausedDurationUs = 0;
int64_t maxTimestampUs = 0;
status_t err = OK;
prctl(PR_SET_NAME, (unsigned long)"AMRWriter", 0, 0, 0);
while (!mDone) { // keep reading in a loop until stop() is called
MediaBufferBase *buffer;
err = mSource->read(&buffer); // read data from MediaCodecSource
// Some code omitted
// Write the data that was read to the output file
ssize_t n = write(mFd,
(const uint8_t *)buffer->data() + buffer->range_offset(),
buffer->range_length());
// Some code omitted
}
if ((err == OK || err == ERROR_END_OF_STREAM) && stoppedPrematurely) {
err = ERROR_MALFORMED;
}
close(mFd);
mFd = -1;
mReachedEOS = true;
if (err == ERROR_END_OF_STREAM) {
return OK;
}
return err;
}
3.2.3 read()
Once the thread started by pthread_create is running, it reads data in a loop via mSource->read().
(1) Fetch data from AudioSource::read() and queue it for encoding:
MediaCodecSource::Puller::start() ==> schedulePull() ==> kWhatPull ==> AudioSource::read() ==>
pushBuffer() queues the buffer for encoding
(2) Fetch already-encoded data from the encoded-data queue:
MediaCodecSource::read() ==> mBufferQueue (the queue of encoded data)
(3) The data returned by AudioSource::read() ultimately comes from the HAL layer:
AudioRecordThread::threadLoop() ==> AudioRecord::processAudioBuffer() ==> AudioRecord::obtainBuffer() ==> AudioSource::AudioRecordCallbackFunction ==> dataCallback() ==> queueInputBuffer_l() ==> mBuffersReceived.push_back(buffer)
==> AudioSource::read() ==> reads from mBuffersReceived, where received data is stored
- processAudioBuffer: fetches data from AudioFlinger's shared memory
- AudioRecordCallbackFunction is passed in when the AudioRecord is created, as the cbf parameter (stored in mCbf):
mCbf(EVENT_MORE_DATA, mUserData, &audioBuffer);
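The cbf/mUserData mechanism is a plain C-style callback plus an opaque user pointer; a sketch follows (apart from EVENT_MORE_DATA, all names here are illustrative, not the real AudioRecord types):

```cpp
#include <cstddef>

enum Event { EVENT_MORE_DATA };
struct Buffer { const char* data; size_t size; };
using Callback = void (*)(Event event, void* user, Buffer* buffer);

// Producer side: stores the function pointer (cbf) and opaque user
// pointer, then invokes mCbf(EVENT_MORE_DATA, mUserData, &buffer)
// whenever data arrives, as AudioRecord does.
struct Source {
    Callback mCbf = nullptr;
    void* mUserData = nullptr;
    void setCallback(Callback cbf, void* user) { mCbf = cbf; mUserData = user; }
    void deliver(Buffer* b) { if (mCbf) mCbf(EVENT_MORE_DATA, mUserData, b); }
};

// Consumer side: a static trampoline casts the user pointer back to the
// object, the way AudioSource::AudioRecordCallbackFunction forwards to `this`.
struct Sink {
    size_t received = 0;
    static void trampoline(Event, void* user, Buffer* b) {
        static_cast<Sink*>(user)->received += b->size;
    }
};
```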
4. Audio data transfer
(1) AudioFlinger reads data from the HAL.
(2) AudioRecord fetches data from AudioFlinger.
4.1 AudioHal -> AudioFlinger
The AudioFlinger service reads data on its RecordThread. The threadLoop function is fairly involved; the key steps are:
- mInput->stream->read reads data from the HAL.
- activeTrack->getNextBuffer(&activeTrack->mSink) obtains the next available shared buffer.
- activeTrack->mRecordBufferConverter->convert(activeTrack->mSink.raw, activeTrack->mResamplerBufferProvider, framesOut) copies the audio data into the shared-memory buffer.
- Threads.cpp#bool AudioFlinger::RecordThread::threadLoop()
// otherwise use the HAL / AudioStreamIn directly
} else {
ATRACE_BEGIN("read");
size_t bytesRead;
status_t result = mInput->stream->read(
(uint8_t*)mRsmpInBuffer + rear * mFrameSize, mBufferSize, &bytesRead);
ATRACE_END();
if (result < 0) {
framesRead = result;
} else {
framesRead = bytesRead / mFrameSize;
}
}
mInput->stream: an AudioStreamIn, which ultimately maps to the HAL's adev_open_input_stream.
mInput->stream->read() ==> audio_hw_hal.cpp#in_read() ==> AudioALSAStreamIn.cpp#read()
4.2 AudioFlinger -> AudioRecord
The audio data received by AudioFlinger is handed to AudioRecord through shared memory.
(1) read
AudioRecord.java's read function eventually calls into AudioRecord::obtainBuffer:
AudioRecord::obtainBuffer
==> status = proxy->obtainBuffer(&buffer, requested, elapsed);
Here proxy is an AudioRecordClientProxy, the counterpart of the AudioRecordServerProxy on the AudioFlinger side.
Through this proxy pair and the shared memory, the client reads the audio data up into the app layer for later use.
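The proxy pair over shared memory behaves like a single-producer/single-consumer ring buffer: the server side (AudioFlinger) advances a write index over the shared buffer while the client side (AudioRecord) advances a read index. A much-simplified sketch of that idea, not the real proxy code:

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>

// Both sides see the same memory; only the indices are synchronized.
// Indices increase monotonically; position in the buffer is index % N.
template <size_t N>
struct SharedRing {
    uint8_t data[N];
    std::atomic<size_t> rear{0};   // advanced by the server (writer)
    std::atomic<size_t> front{0};  // advanced by the client (reader)
};

// Server side: copy up to n bytes into free space, then publish.
template <size_t N>
size_t serverWrite(SharedRing<N>& r, const uint8_t* src, size_t n) {
    size_t rear = r.rear.load(), front = r.front.load();
    size_t avail = N - (rear - front);          // free space
    size_t todo = n < avail ? n : avail;
    for (size_t i = 0; i < todo; ++i) r.data[(rear + i) % N] = src[i];
    r.rear.store(rear + todo);
    return todo;
}

// Client side: copy up to n bytes out of filled space, then release.
template <size_t N>
size_t clientRead(SharedRing<N>& r, uint8_t* dst, size_t n) {
    size_t rear = r.rear.load(), front = r.front.load();
    size_t avail = rear - front;                // filled space
    size_t todo = n < avail ? n : avail;
    for (size_t i = 0; i < todo; ++i) dst[i] = r.data[(front + i) % N];
    r.front.store(front + todo);
    return todo;
}
```

The real proxies add blocking with futexes, frame-based accounting, and obtainBuffer/releaseBuffer semantics, but the index handoff over shared memory is the core mechanism that lets the data cross process boundaries without copies through Binder.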