When a Java layer is involved, monitoring BpBinder can equally be understood as monitoring BinderProxy, since every Java BinderProxy wraps a native BpBinder.
systemReady
frameworks/base/services/core/java/com/android/server/am/ActivityManagerService.java
/**
* The number of binder proxies we need to have before we start warning and
* dumping debug info.
*/
private static final int BINDER_PROXY_HIGH_WATERMARK = 6000;
/**
* Low watermark that needs to be met before we consider dumping info again,
* after already hitting the high watermark.
*/
private static final int BINDER_PROXY_LOW_WATERMARK = 5500;
t.traceBegin("setBinderProxies");
BinderInternal.nSetBinderProxyCountWatermarks(BINDER_PROXY_HIGH_WATERMARK,
        BINDER_PROXY_LOW_WATERMARK);
BinderInternal.nSetBinderProxyCountEnabled(true);
BinderInternal.setBinderProxyCountCallback(
(uid) -> {
Slog.wtf(TAG, "Uid " + uid + " sent too many Binders to uid "
+ Process.myUid());
BinderProxy.dumpProxyDebugInfo();
if (uid == Process.SYSTEM_UID) {
Slog.i(TAG, "Skipping kill (uid is SYSTEM)");
} else {
killUid(UserHandle.getAppId(uid), UserHandle.getUserId(uid),
"Too many Binders sent to SYSTEM");
// We need to run a GC here, because killing the processes involved
// actually isn't guaranteed to free up the proxies; in fact, if the
// GC doesn't run for a long time, we may even exceed the global
// proxy limit for a process (20000), resulting in system_server itself
// being killed.
// Note that the GC here might not actually clean up all the proxies,
// because the binder reference decrements will come in asynchronously;
// but if new processes belonging to the UID keep adding proxies, we
// will get another callback here, and run the GC again - this time
// cleaning up the old proxies.
VMRuntime.getRuntime().requestConcurrentGC();
}
}, mHandler);
t.traceEnd(); // setBinderProxies
In systemReady, AMS sets the BinderProxy-count watermarks, enables per-UID counting, and registers the callback that fires when a UID's proxy count crosses the limit.
nSetBinderProxyCountWatermarks
/**
* Set the Binder Proxy watermarks. Default high watermark = 2500. Default low watermark = 2000
* @param high The limit at which the BinderProxyListener callback will be called.
* @param low The threshold a binder count must drop below before the callback
* can be called again. (This is to avoid many repeated calls to the
* callback in a brief period of time)
*/
public static final native void nSetBinderProxyCountWatermarks(int high, int low);
It is a native method:
frameworks/base/core/jni/android_util_Binder.cpp
static const JNINativeMethod gBinderInternalMethods[] = {
......
{ "nSetBinderProxyCountWatermarks", "(II)V", (void*)android_os_BinderInternal_setBinderProxyCountWatermarks}
};
static void android_os_BinderInternal_setBinderProxyCountWatermarks(JNIEnv* env, jobject clazz,
jint high, jint low)
{
BpBinder::setBinderProxyCountWatermarks(high, low);
}
void BpBinder::setBinderProxyCountWatermarks(int high, int low) {
AutoMutex _l(sTrackingLock);
sBinderProxyCountHighWatermark = high;
sBinderProxyCountLowWatermark = low;
}
setBinderProxyCountWatermarks overwrites sBinderProxyCountHighWatermark and sBinderProxyCountLowWatermark, whose defaults are:
// Arbitrarily high value that probably distinguishes a bad behaving app
uint32_t BpBinder::sBinderProxyCountHighWatermark = 2500;
// Another arbitrary value a binder count needs to drop below before another callback will be called
uint32_t BpBinder::sBinderProxyCountLowWatermark = 2000;
sBinderProxyCountHighWatermark defaults to 2500 and sBinderProxyCountLowWatermark to 2000.
When a UID's proxy count rises past the high watermark, the callback fires.
The callback cannot fire again until the count has first dropped below the low watermark; with the defaults, that means a callback at 2500 proxies, and a second one only after the count falls below 2000 and climbs back up.
nSetBinderProxyCountEnabled
/**
* Enable/disable Binder Proxy Instance Counting by Uid. While enabled, the set callback will
* be called if this process holds too many Binder Proxies on behalf of a Uid.
* @param enabled true to enable counting, false to disable
*/
public static final native void nSetBinderProxyCountEnabled(boolean enabled);
This enables or disables per-UID counting of Binder proxy instances. While counting is enabled, the registered callback is invoked whenever this process holds too many Binder proxies on behalf of a single UID.
static const JNINativeMethod gBinderInternalMethods[] = {
......
{ "nSetBinderProxyCountEnabled", "(Z)V", (void*)android_os_BinderInternal_setBinderProxyCountEnabled },
static void android_os_BinderInternal_setBinderProxyCountEnabled(JNIEnv* env, jobject clazz,
jboolean enable)
{
BpBinder::setCountByUidEnabled((bool) enable);
}
void BpBinder::setCountByUidEnabled(bool enable) { sCountByUidEnabled.store(enable); }
std::atomic_bool BpBinder::sCountByUidEnabled(false);
sCountByUidEnabled is an std::atomic_bool defaulting to false; the call above stores true into it.
setBinderProxyCountCallback
/**
* Set a callback to be triggered when a uid's Binder Proxy limit is reached for this process.
* @param listener OnLimitReached of listener will be called in the thread provided by handler
* @param handler must not be null, callback will be posted through the handler;
*
*/
public static void setBinderProxyCountCallback(BinderProxyLimitListener listener,
@NonNull Handler handler) {
Preconditions.checkNotNull(handler,
"Must provide NonNull Handler to setBinderProxyCountCallback when setting "
+ "BinderProxyLimitListener");
sBinderProxyLimitListenerDelegate.setListener(listener, handler);
}
This registers the listener along with the Handler it should run on. When a UID's binder proxy count reaches the threshold, the listener is invoked on that handler.
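To see the whole API in one place, here is a minimal sketch (not AOSP code) of how a process with framework access could wire these three calls together, mirroring what AMS does but merely logging. Note that BinderInternal is a hidden framework class (com.android.internal.os.BinderInternal), so this compiles only inside the platform; the ProxyMonitor name and the thresholds here are arbitrary.

import android.os.Handler;
import android.os.HandlerThread;
import android.util.Log;
import com.android.internal.os.BinderInternal;

class ProxyMonitor {
    static void install() {
        // Callbacks will be posted to this thread, not run on a binder thread.
        HandlerThread thread = new HandlerThread("binder-proxy-monitor");
        thread.start();
        Handler handler = new Handler(thread.getLooper());

        BinderInternal.nSetBinderProxyCountWatermarks(/* high= */ 3000, /* low= */ 2500);
        BinderInternal.nSetBinderProxyCountEnabled(true);
        BinderInternal.setBinderProxyCountCallback(uid ->
                // AMS would dump debug info and kill the uid here; we only log.
                Log.w("ProxyMonitor", "uid " + uid + " holds too many binder proxies"),
                handler);
    }
}

Internally, setBinderProxyCountCallback just hands the listener/handler pair to a singleton delegate: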
static final BinderProxyLimitListenerDelegate sBinderProxyLimitListenerDelegate =
new BinderProxyLimitListenerDelegate();
static private class BinderProxyLimitListenerDelegate {
private BinderProxyLimitListener mBinderProxyLimitListener;
private Handler mHandler;
void setListener(BinderProxyLimitListener listener, Handler handler) {
synchronized (this) {
mBinderProxyLimitListener = listener;
mHandler = handler;
}
}
void notifyClient(final int uid) {
synchronized (this) {
if (mBinderProxyLimitListener != null) {
mHandler.post(new Runnable() {
@Override
public void run() {
mBinderProxyLimitListener.onLimitReached(uid);
}
});
}
}
}
}
As you can see, this merely stores the listener; the method that actually fires it is notifyClient. Let's find where notifyClient is called:
/**
* Callback used by native code to trigger a callback in java code. The callback will be
* triggered when too many binder proxies from a uid hits the allowed limit.
* @param uid The uid of the bad behaving app sending too many binders
*/
public static void binderProxyLimitCallbackFromNative(int uid) {
sBinderProxyLimitListenerDelegate.notifyClient(uid);
}
That method is invoked from native code:
static int int_register_android_os_BinderInternal(JNIEnv* env)
{
jclass clazz = FindClassOrDie(env, kBinderInternalPathName);
gBinderInternalOffsets.mProxyLimitCallback = GetStaticMethodIDOrDie(env, clazz, "binderProxyLimitCallbackFromNative", "(I)V");
return RegisterMethodsOrDie(
env, kBinderInternalPathName,
gBinderInternalMethods, NELEM(gBinderInternalMethods));
}
When the JNI methods are registered, the method ID of binderProxyLimitCallbackFromNative is cached in gBinderInternalOffsets.mProxyLimitCallback. Now let's see where that method ID is actually invoked:
static void android_os_BinderInternal_proxyLimitcallback(int uid)
{
JNIEnv *env = AndroidRuntime::getJNIEnv();
env->CallStaticVoidMethod(gBinderInternalOffsets.mClass,
gBinderInternalOffsets.mProxyLimitCallback,
uid);
if (env->ExceptionCheck()) {
ScopedLocalRef<jthrowable> excep(env, env->ExceptionOccurred());
binder_report_exception(env, excep.get(),
"*** Uncaught exception in binderProxyLimitCallbackFromNative");
}
}
It is called by the JNI function android_os_BinderInternal_proxyLimitcallback. So where is that function called from?
static int int_register_android_os_BinderInternal(JNIEnv* env)
{
jclass clazz = FindClassOrDie(env, kBinderInternalPathName);
...
BpBinder::setLimitCallback(android_os_BinderInternal_proxyLimitcallback);
return RegisterMethodsOrDie(
env, kBinderInternalPathName,
gBinderInternalMethods, NELEM(gBinderInternalMethods));
}
So during JNI registration, android_os_BinderInternal_proxyLimitcallback is installed as BpBinder's limit callback.
binder_proxy_limit_callback BpBinder::sLimitCallback;
void BpBinder::setLimitCallback(binder_proxy_limit_callback cb) {
AutoMutex _l(sTrackingLock);
sLimitCallback = cb;
}
android_os_BinderInternal_proxyLimitcallback is thus stored in sLimitCallback. And where is sLimitCallback invoked? In BpBinder::create:
sp<BpBinder> BpBinder::create(int32_t handle) {
int32_t trackedUid = -1;
if (sCountByUidEnabled) {
trackedUid = IPCThreadState::self()->getCallingUid();
AutoMutex _l(sTrackingLock);
uint32_t trackedValue = sTrackingMap[trackedUid];
if (CC_UNLIKELY(trackedValue & LIMIT_REACHED_MASK)) {
if (sBinderProxyThrottleCreate) {
return nullptr;
}
trackedValue = trackedValue & COUNTING_VALUE_MASK;
uint32_t lastLimitCallbackAt = sLastLimitCallbackMap[trackedUid];
if (trackedValue > lastLimitCallbackAt &&
(trackedValue - lastLimitCallbackAt > sBinderProxyCountHighWatermark)) {
ALOGE("Still too many binder proxy objects sent to uid %d from uid %d (%d proxies "
"held)",
getuid(), trackedUid, trackedValue);
if (sLimitCallback) sLimitCallback(trackedUid);
sLastLimitCallbackMap[trackedUid] = trackedValue;
}
} else {
if ((trackedValue & COUNTING_VALUE_MASK) >= sBinderProxyCountHighWatermark) {
ALOGE("Too many binder proxy objects sent to uid %d from uid %d (%d proxies held)",
getuid(), trackedUid, trackedValue);
sTrackingMap[trackedUid] |= LIMIT_REACHED_MASK;
if (sLimitCallback) sLimitCallback(trackedUid);
sLastLimitCallbackMap[trackedUid] = trackedValue & COUNTING_VALUE_MASK;
if (sBinderProxyThrottleCreate) {
ALOGI("Throttling binder proxy creates from uid %d in uid %d until binder proxy"
" count drops below %d",
trackedUid, getuid(), sBinderProxyCountLowWatermark);
return nullptr;
}
}
}
sTrackingMap[trackedUid]++;
}
return sp<BpBinder>::make(BinderHandle{handle}, trackedUid);
}
- Each UID maps to a counter in sTrackingMap; every BpBinder created on behalf of that UID increments it, so the map records proxy counts keyed by UID.
- When a UID's count reaches sBinderProxyCountHighWatermark, the LIMIT_REACHED_MASK flag is set and the sLimitCallback callback is executed.
- A BpBinder object is returned, unless throttling (sBinderProxyThrottleCreate) is enabled and the limit has been reached, in which case nullptr is returned.
Note the distinction between trackedUid and getuid():
trackedUid = IPCThreadState::self()->getCallingUid(); // the UID of the calling (sending) process
getuid(); // the UID of the current process
This means BpBinder creation can be traced back to its source, i.e., which process keeps sending binder requests that cause BpBinder objects to pile up.
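The following minimal Java sketch (not AOSP code; class and method names are hypothetical) models this bookkeeping: the per-UID counting in BpBinder::create with the throttle path enabled, plus the flag clearing that in AOSP happens in BpBinder::onLastStrongRef when proxies are destroyed. The masks follow the enum in BpBinder.cpp: the top bit marks that the limit was reached, the low 31 bits hold the count.

import java.util.HashMap;
import java.util.Map;

class ProxyCountModel {
    // Same bit layout as the enum in BpBinder.cpp.
    static final int LIMIT_REACHED_MASK = 0x80000000;
    static final int COUNTING_VALUE_MASK = 0x7FFFFFFF;

    private final int mHigh; // 2500 natively by default; AMS raises it to 6000
    private final int mLow;  // 2000 natively by default; AMS raises it to 5500
    private final Map<Integer, Integer> mTracking = new HashMap<>(); // uid -> flag|count

    ProxyCountModel(int high, int low) { mHigh = high; mLow = low; }

    // Models BpBinder::create with sBinderProxyThrottleCreate == true:
    // returns false when creation would be throttled (create() returns nullptr).
    boolean create(int uid) {
        int v = mTracking.getOrDefault(uid, 0);
        if ((v & LIMIT_REACHED_MASK) != 0) return false; // still over the limit
        if ((v & COUNTING_VALUE_MASK) >= mHigh) {
            // This is the point where BpBinder fires sLimitCallback(trackedUid).
            mTracking.put(uid, v | LIMIT_REACHED_MASK);
            return false;
        }
        mTracking.put(uid, v + 1);
        return true;
    }

    // Models the cleanup in BpBinder::onLastStrongRef: once the count falls to
    // the low watermark, the flag is cleared and creation/callbacks re-arm.
    void destroy(int uid) {
        int v = mTracking.getOrDefault(uid, 0);
        int count = (v & COUNTING_VALUE_MASK) - 1;
        int flag = v & LIMIT_REACHED_MASK;
        if (flag != 0 && count <= mLow) flag = 0;
        mTracking.put(uid, flag | Math.max(count, 0));
    }
}

With the default watermarks, create() for a UID starts failing once 2500 proxies are live, and succeeds again only after destroy() has brought the count down to 2000.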
setBinderProxyCountCallback
Back in AMS systemReady, look again at what the setBinderProxyCountCallback callback does when it fires:
BinderInternal.setBinderProxyCountCallback(
(uid) -> {
Slog.wtf(TAG, "Uid " + uid + " sent too many Binders to uid "
+ Process.myUid());
BinderProxy.dumpProxyDebugInfo();
if (uid == Process.SYSTEM_UID) {
Slog.i(TAG, "Skipping kill (uid is SYSTEM)");
} else {
killUid(UserHandle.getAppId(uid), UserHandle.getUserId(uid),
"Too many Binders sent to SYSTEM");
// We need to run a GC here, because killing the processes involved
// actually isn't guaranteed to free up the proxies; in fact, if the
// GC doesn't run for a long time, we may even exceed the global
// proxy limit for a process (20000), resulting in system_server itself
// being killed.
// Note that the GC here might not actually clean up all the proxies,
// because the binder reference decrements will come in asynchronously;
// but if new processes belonging to the UID keep adding proxies, we
// will get another callback here, and run the GC again - this time
// cleaning up the old proxies.
VMRuntime.getRuntime().requestConcurrentGC();
}
}, mHandler);
- It logs the offending UID (Slog.wtf) and dumps BinderProxy debug info for the process.
- If the UID is not SYSTEM and its BinderProxy count in system_server has exceeded 6000 (BINDER_PROXY_HIGH_WATERMARK), the app is killed, followed by a concurrent GC request to actually reclaim the proxies.
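What does it mean for a UID to "send too many Binders"? Typically an app repeatedly passes fresh Binder objects (for example listener or callback tokens) to a system service that retains them. Below is a hypothetical illustration with all names made up; RemoteRegistry merely stands in for a real cross-process service that keeps every IBinder it receives.

import android.os.Binder;
import android.os.IBinder;
import java.util.ArrayList;
import java.util.List;

class RemoteRegistry {
    private final List<IBinder> mTokens = new ArrayList<>();
    void register(IBinder token) { mTokens.add(token); }
}

public class ProxyLeakDemo {
    public static void main(String[] args) {
        RemoteRegistry registry = new RemoteRegistry();

        // BAD: each call ships a brand-new Binder object. In a real
        // cross-process call, each retained token becomes one BinderProxy in
        // system_server counted against this uid, climbing toward the 6000
        // watermark, at which point AMS logs, dumps debug info, and kills us.
        for (int i = 0; i < 7000; i++) {
            registry.register(new Binder());
        }

        // GOOD: reuse one token (and unregister when no longer needed), so
        // the server holds a single proxy no matter how often it is passed.
        IBinder token = new Binder();
        registry.register(token);
    }
}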
If anything above is incorrect, corrections are welcome.