Preface
In an earlier article we dug into the Android source by following the launch of an Activity, a system-level API. This article starts instead from a device-level API, Camera, to study the Android Camera source and how the overall Android architecture operates.
Overview
Below is the sequence diagram of the Camera API calls. To use the Camera, we first open the camera with the open method, then call startPreview to start the preview, and finally take a photo with takePicture. The analysis below uses two of these methods, open and takePicture, as its entry points.
Camera Source Code Analysis
The analysis below follows the sequence diagram step by step, taking Camera's open method as the example; the other methods work much the same way, so they are not repeated here.
We usually obtain a Camera instance like this:
/**
 * A safe way to get an instance of the Camera object.
 */
public android.hardware.Camera getCameraInstance() {
    android.hardware.Camera c = null;
    try {
        c = android.hardware.Camera.open(); // attempt to get a Camera instance
    } catch (Exception e) {
        // Camera is not available (in use or does not exist)
    }
    return c; // returns null if camera is unavailable
}
open is overloaded; if no camera ID is specified, the back-facing camera is opened by default (step 1 in the sequence diagram):
public static Camera open(int cameraId) {
    return new Camera(cameraId);
}

public static Camera open() {
    int numberOfCameras = getNumberOfCameras();
    CameraInfo cameraInfo = new CameraInfo();
    for (int i = 0; i < numberOfCameras; i++) {
        getCameraInfo(i, cameraInfo);
        if (cameraInfo.facing == CameraInfo.CAMERA_FACING_BACK) {
            return new Camera(i);
        }
    }
    return null;
}
Camera's constructor is as follows:
Camera(int cameraId) {
    int err = cameraInitNormal(cameraId);
    if (checkInitErrors(err)) {
        if (err == -EACCES) {
            throw new RuntimeException("Fail to connect to camera service");
        } else if (err == -ENODEV) {
            throw new RuntimeException("Camera initialization failed");
        }
        // Should never hit this.
        throw new RuntimeException("Unknown camera error");
    }
}
The key step is initializing the camera through the native_setup method, reached via cameraInitNormal:
private int cameraInitNormal(int cameraId) {
    return cameraInitVersion(cameraId, CAMERA_HAL_API_VERSION_NORMAL_CONNECT);
}

private int cameraInitVersion(int cameraId, int halVersion) {
    mShutterCallback = null;
    mRawImageCallback = null;
    mJpegCallback = null;
    mPreviewCallback = null;
    mPostviewCallback = null;
    mUsingPreviewAllocation = false;
    mZoomListener = null;

    Looper looper;
    if ((looper = Looper.myLooper()) != null) {
        mEventHandler = new EventHandler(this, looper);
    } else if ((looper = Looper.getMainLooper()) != null) {
        mEventHandler = new EventHandler(this, looper);
    } else {
        mEventHandler = null;
    }

    return native_setup(new WeakReference<Camera>(this), cameraId, halVersion,
            ActivityThread.currentOpPackageName());
}
native_setup is a native method, implemented in frameworks/base/core/jni/android_hardware_Camera.cpp in the source tree (step 2).
You may not find a method literally named native_setup in android_hardware_Camera.cpp, because the camMethods table maps native_setup to android_hardware_Camera_native_setup. (The table is registered at Android startup when register_android_hardware_Camera calls RegisterMethodsOrDie.)
static const JNINativeMethod camMethods[] = {
{ "getNumberOfCameras",
"()I",
(void *)android_hardware_Camera_getNumberOfCameras },
{ "_getCameraInfo",
"(ILandroid/hardware/Camera$CameraInfo;)V",
(void*)android_hardware_Camera_getCameraInfo },
{ "native_setup",
"(Ljava/lang/Object;IILjava/lang/String;)I",
(void*)android_hardware_Camera_native_setup },
{ "native_release",
"()V",
(void*)android_hardware_Camera_release },
{ "setPreviewSurface",
"(Landroid/view/Surface;)V",
(void *)android_hardware_Camera_setPreviewSurface },
{ "setPreviewTexture",
"(Landroid/graphics/SurfaceTexture;)V",
(void *)android_hardware_Camera_setPreviewTexture },
{ "setPreviewCallbackSurface",
"(Landroid/view/Surface;)V",
(void *)android_hardware_Camera_setPreviewCallbackSurface },
{ "startPreview",
"()V",
(void *)android_hardware_Camera_startPreview },
{ "_stopPreview",
"()V",
(void *)android_hardware_Camera_stopPreview },
{ "previewEnabled",
"()Z",
(void *)android_hardware_Camera_previewEnabled },
{ "setHasPreviewCallback",
"(ZZ)V",
(void *)android_hardware_Camera_setHasPreviewCallback },
{ "_addCallbackBuffer",
"([BI)V",
(void *)android_hardware_Camera_addCallbackBuffer },
{ "native_autoFocus",
"()V",
(void *)android_hardware_Camera_autoFocus },
{ "native_cancelAutoFocus",
"()V",
(void *)android_hardware_Camera_cancelAutoFocus },
{ "native_takePicture",
"(I)V",
(void *)android_hardware_Camera_takePicture },
{ "native_setParameters",
"(Ljava/lang/String;)V",
(void *)android_hardware_Camera_setParameters },
{ "native_getParameters",
"()Ljava/lang/String;",
(void *)android_hardware_Camera_getParameters },
{ "reconnect",
"()V",
(void*)android_hardware_Camera_reconnect },
{ "lock",
"()V",
(void*)android_hardware_Camera_lock },
{ "unlock",
"()V",
(void*)android_hardware_Camera_unlock },
{ "startSmoothZoom",
"(I)V",
(void *)android_hardware_Camera_startSmoothZoom },
{ "stopSmoothZoom",
"()V",
(void *)android_hardware_Camera_stopSmoothZoom },
{ "setDisplayOrientation",
"(I)V",
(void *)android_hardware_Camera_setDisplayOrientation },
{ "_enableShutterSound",
"(Z)Z",
(void *)android_hardware_Camera_enableShutterSound },
{ "_startFaceDetection",
"(I)V",
(void *)android_hardware_Camera_startFaceDetection },
{ "_stopFaceDetection",
"()V",
(void *)android_hardware_Camera_stopFaceDetection},
{ "enableFocusMoveCallback",
"(I)V",
(void *)android_hardware_Camera_enableFocusMoveCallback},
};
// Get all the required offsets in java class and register native functions
int register_android_hardware_Camera(JNIEnv *env)
{
    // ... omitted

    // Register native functions
    return RegisterMethodsOrDie(env, "android/hardware/Camera", camMethods, NELEM(camMethods));
}
A note on the JNI binding used here. Normally a native function is associated with its Java declaration through the packageName_methodName naming convention; this code uses dynamic registration instead, which works as follows:
- The Java layer declares the native methods.
- The native layer implements those methods; there it can call into lower-level libraries or call back into Java, and it is compiled into a dynamic/static library.
- The native and Java methods are associated through the table shown below.
- At initialization time (for example when the JNI library is loaded, or when the Android system starts), RegisterMethodsOrDie is called to register camMethods[].
static const JNINativeMethod camMethods[] = {
    // 1. the name of the native function as declared in Java
    // 2. the Java method's JNI signature
    // 3. a function pointer to the native implementation
    { "native_setup",
      "(Ljava/lang/Object;IILjava/lang/String;)I",
      (void*)android_hardware_Camera_native_setup }
};
So the method we care about is android_hardware_Camera_native_setup:
// connect to camera service
static jint android_hardware_Camera_native_setup(JNIEnv *env, jobject thiz,
        jobject weak_this, jint cameraId, jint halVersion, jstring clientPackageName)
{
    sp<Camera> camera;
    // (clientPackageName is first converted to a String16 clientName; elided here)
    if (halVersion == CAMERA_HAL_API_VERSION_NORMAL_CONNECT) {
        // Default path: hal version is don't care, do normal camera connect.
        camera = Camera::connect(cameraId, clientName,
                Camera::USE_CALLING_UID, Camera::USE_CALLING_PID);
    }
    // ... (the rest of the function is elided)
}
The core here is the call to Camera::connect, which connects to the camera (step 3).
The Camera class is defined in the header frameworks/av/camera/include/camera/Camera.h:
class Camera :
    public CameraBase<Camera>,
    public ::android::hardware::BnCameraClient
{
};
As you can see, Camera inherits from CameraBase&lt;Camera&gt; and from BnCameraClient. Much of Camera's concrete behavior, connect included, is implemented in frameworks/av/camera/CameraBase.cpp (step 4). (This involves C++ class templates; the relevant background is covered in the author's NDK series and is not repeated here.)
template <typename TCam, typename TCamTraits>
sp<TCam> CameraBase<TCam, TCamTraits>::connect(int cameraId,
                                               const String16& clientPackageName,
                                               int clientUid, int clientPid)
{
    ALOGV("%s: connect", __FUNCTION__);
    sp<TCam> c = new TCam(cameraId);
    sp<TCamCallbacks> cl = c;
    const sp<::android::hardware::ICameraService> cs = getCameraService();

    binder::Status ret;
    if (cs != nullptr) {
        TCamConnectService fnConnectService = TCamTraits::fnConnectService;
        ret = (cs.get()->*fnConnectService)(cl, cameraId, clientPackageName, clientUid,
                clientPid, /*out*/ &c->mCamera);
    }
    if (ret.isOk() && c->mCamera != nullptr) {
        IInterface::asBinder(c->mCamera)->linkToDeath(c);
        c->mStatus = NO_ERROR;
    } else {
        ALOGW("An error occurred while connecting to camera %d: %s", cameraId,
                (cs == nullptr) ? "Service not available" : ret.toString8().string());
        c.clear();
    }
    return c;
}
connect first obtains the camera service via getCameraService() (step 5), implemented as follows (part of the code is elided):
// establish binder interface to camera service
template <typename TCam, typename TCamTraits>
const sp<::android::hardware::ICameraService> CameraBase<TCam, TCamTraits>::getCameraService()
{
    Mutex::Autolock _l(gLock);
    if (gCameraService.get() == 0) {
        // Step 1: obtain the ServiceManager
        sp<IServiceManager> sm = defaultServiceManager();
        sp<IBinder> binder;
        do {
            // Step 2: fetch the CameraService from the ServiceManager (as an IBinder)
            binder = sm->getService(String16(kCameraServiceName));
            // (the break on success and the retry back-off are elided here)
        } while(true);
        // Step 3: cast the IBinder interface to ICameraService
        gCameraService = interface_cast<::android::hardware::ICameraService>(binder);
    }
    return gCameraService;
}
After connect obtains the CameraService, it connects to the service through fnConnectService, a pointer to a member function that resolves to BpCameraService::connect (step 6).
BpCameraService and BnCameraService then carry out the inter-process communication, i.e. the Application Framework talks to the System Service (steps 7 and 8). Since CameraService derives from (and implements) BnCameraService, the call ultimately lands in the connect method in frameworks/av/services/camera/libcameraservice/CameraService.cpp (step 9):
Status CameraService::connect(
        const sp<ICameraClient>& cameraClient,
        int cameraId,
        const String16& clientPackageName,
        int clientUid,
        int clientPid,
        /*out*/
        sp<ICamera>* device) {
    ATRACE_CALL();
    Status ret = Status::ok();

    String8 id = String8::format("%d", cameraId);
    sp<Client> client = nullptr;
    ret = connectHelper<ICameraClient,Client>(cameraClient, id,
            CAMERA_HAL_API_VERSION_UNSPECIFIED, clientPackageName, clientUid, clientPid, API_1,
            /*legacyMode*/ false, /*shimUpdateOnly*/ false,
            /*out*/client);

    if(!ret.isOk()) {
        logRejected(id, getCallingPid(), String8(clientPackageName),
                ret.toString8());
        return ret;
    }

    *device = client;
    return ret;
}
The work is mainly done by connectHelper, which creates and initializes the CameraClient (steps 10 and 11):
template<class CALLBACK, class CLIENT>
Status CameraService::connectHelper(const sp<CALLBACK>& cameraCb, const String8& cameraId,
        int halVersion, const String16& clientPackageName, int clientUid, int clientPid,
        apiLevel effectiveApiLevel, bool legacyMode, bool shimUpdateOnly,
        /*out*/sp<CLIENT>& device) {
    // ... omitted

    // construct the CameraClient
    sp<BasicClient> tmp = nullptr;
    if(!(ret = makeClient(this, cameraCb, clientPackageName, cameraId, facing, clientPid,
            clientUid, getpid(), legacyMode, halVersion, deviceVersion, effectiveApiLevel,
            /*out*/&tmp)).isOk()) {
        return ret;
    }
    client = static_cast<CLIENT*>(tmp.get());

    LOG_ALWAYS_FATAL_IF(client.get() == nullptr, "%s: CameraService in invalid state",
            __FUNCTION__);

    // initialize the CameraClient
    err = client->initialize(mCameraProviderManager);

    // Update shim paremeters for legacy clients
    if (effectiveApiLevel == API_1) {
        // Assume we have always received a Client subclass for API1
        sp<Client> shimClient = reinterpret_cast<Client*>(client.get());
        String8 rawParams = shimClient->getParameters();
        CameraParameters params(rawParams);

        auto cameraState = getCameraState(cameraId);
        if (cameraState != nullptr) {
            cameraState->setShimParams(params);
        } else {
            ALOGE("%s: Cannot update shim parameters for camera %s, no such device exists.",
                    __FUNCTION__, cameraId.string());
        }
    }

    if (shimUpdateOnly) {
        // If only updating legacy shim parameters, immediately disconnect client
        mServiceLock.unlock();
        client->disconnect();
        mServiceLock.lock();
    } else {
        // Otherwise, add client to active clients list
        finishConnectLocked(client, partial);
    }
    } // lock is destroyed, allow further connect calls

    // Important: release the mutex here so the client can call back into the service from its
    // destructor (can be at the end of the call)
    device = client;
    return ret;
}
The initialization of the CameraClient is implemented as follows (frameworks/av/services/camera/libcameraservice/api1/CameraClient.cpp):
status_t CameraClient::initialize(sp<CameraProviderManager> manager) {
    int callingPid = getCallingPid();
    status_t res;

    LOG1("CameraClient::initialize E (pid %d, id %d)", callingPid, mCameraId);

    // Verify ops permissions
    res = startCameraOps();
    if (res != OK) {
        return res;
    }

    char camera_device_name[10];
    snprintf(camera_device_name, sizeof(camera_device_name), "%d", mCameraId);

    mHardware = new CameraHardwareInterface(camera_device_name);
    res = mHardware->initialize(manager);
    if (res != OK) {
        ALOGE("%s: Camera %d: unable to initialize device: %s (%d)",
                __FUNCTION__, mCameraId, strerror(-res), res);
        mHardware.clear();
        return res;
    }

    mHardware->setCallbacks(notifyCallback,
            dataCallback,
            dataCallbackTimestamp,
            handleCallbackTimestampBatch,
            (void *)(uintptr_t)mCameraId);

    // Enable zoom, error, focus, and metadata messages by default
    enableMsgType(CAMERA_MSG_ERROR | CAMERA_MSG_ZOOM | CAMERA_MSG_FOCUS |
            CAMERA_MSG_PREVIEW_METADATA | CAMERA_MSG_FOCUS_MOVE);

    LOG1("CameraClient::initialize X (pid %d, id %d)", callingPid, mCameraId);
    return OK;
}
At this point almost everything inside CameraClient::initialize is a hardware-related call (step 12); the implementation below it, in the HAL, is specific to each hardware vendor, so our analysis stops here.
takePicture follows a process similar to open's: start from the Java-level API, cross into the native layer via JNI, reach the CameraService through inter-process communication, and finally call down into the HAL. The HAL then returns data to the Java API along the reverse path, eventually reaching the Application that made the original call. One thing to note:
- After the camera has been opened, operations such as takePicture no longer talk to the CameraService directly; they talk to the CameraClient that the CameraService created.
Recall that while connecting to the CameraService, connect first obtained the service and then connected through fnConnectService, the member-function pointer that resolves to BpCameraService::connect. fnConnectService is defined as:
CameraTraits<Camera>::TCamConnectService CameraTraits<Camera>::fnConnectService =
        &::android::hardware::ICameraService::connect;
The real server-side implementation behind it is the CameraService::connect method we walked through above.
The Application Framework side passes its ICameraClient callback interface across to the CameraService; the CameraService creates and initializes the CameraClient and hands it back through the out parameter. From then on, both the Application Framework and the CameraService hold a reference to the CameraClient: every operation after open is handled by the CameraClient, which in turn deals with the HAL.
Summary
From the source analysis above, we can see how the whole Android architecture operates.
The key points are:
- The Application starts the Camera call chain through the Java API.
- JNI carries the call into the Native Framework.
- The Native Framework uses inter-process communication (a Binder IPC proxy) so that the Application Framework can talk to the System Service (CameraService).
- The CameraService ultimately provides the service and deals with the HAL; data from the HAL travels back to the Application along the same route.
- It is the cooperation of these layers that keeps the whole Android system running in an orderly way.
Note:
- The Application Framework can be implemented in Java or in native code (the two are tied together through JNI).
- Likewise, the System Service can be implemented in Java or in native code.
- The HAL implementation is vendor-specific, and we generally do not need to care about it.
Recommended reading
http://www.jianshu.com/p/3bac6334f095
If you find my writing helpful, feel free to follow my official account:
Everyone is welcome in my group to discuss all kinds of technical and non-technical topics; if you are interested, add me on WeChat (huannan88) and I will pull you into the group.