An Analysis of How AIDL Works

AIDL stands for Android Interface Definition Language.

AIDL is mainly used to solve the problem of inter-process communication; it provides a layer of encapsulation over the underlying Binder mechanism. The IDE can generate the corresponding Java code of the same name from a .aidl file based on a template. Note that we do not have to rely on the IDE template: the equivalent code can also be written by hand.

Defining the files:

  1. Create a .aidl file. In Android Studio, create an aidl directory at the same level as the java directory, put all .aidl files into it, and organize them by package name.
  2. Supported data types: the default types byte, short, int, long, float, double, boolean, char, String, CharSequence, List, and Map; any other reference type must be imported with its fully qualified package path, even if it lives in the same package.
  3. Once the definition is complete, running Make Project in Android Studio automatically generates the corresponding Java file. You can refer to the definitions in the aidl folder of the sample code.
  4. Writing the server side: see RemoteService (a minimal sketch follows this list).
  5. Binding the service: see the code in BindingActivity.
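
A minimal sketch of what the server side might look like (the class and package names follow the snippets used later in this article; for simplicity it assumes registerCallback is the only method declared in IRemoteService.aidl, and the callback bookkeeping is elided):

import android.app.Service;
import android.content.Intent;
import android.os.IBinder;
import com.android.samll.aidl.IRemoteService;
import com.android.samll.aidl.IRemoteServiceCallback;

public class RemoteService extends Service {

    // The generated Stub is an abstract Binder; implementing it here produces the
    // object that clients will talk to.
    private final IRemoteService.Stub mBinder = new IRemoteService.Stub() {
        @Override
        public void registerCallback(IRemoteServiceCallback cb) {
            // keep the callback so the service can call back into the client later
        }
    };

    @Override
    public IBinder onBind(Intent intent) {
        // Same-process clients receive this exact object; cross-process clients
        // receive a BinderProxy that stands in for it.
        return mBinder;
    }
}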

Creating .aidl files and the build process are not covered in depth here; if you are unfamiliar with them, please consult the relevant material. This article focuses on how the mechanism works.

The following uses IRemoteService as an example. After the Service has been bound successfully:

private ServiceConnection mConnection = new ServiceConnection() {

        @Override
        public void onServiceConnected(ComponentName className, IBinder service) {
            mService = IRemoteService.Stub.asInterface(service);
            ......
        }

        @Override
        public void onServiceDisconnected(ComponentName className) {
            mService = null;
            ......
        }
};

Note the second parameter of onServiceConnected here: if both sides of the communication are in the same process, it is the object returned by Service.onBind(); if they are not in the same process, it is a BinderProxy object, which is defined in Binder.java. For the reason why, see the follow-up article AIDL进阶.
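
As a hedged illustration (not taken from the sample project), you can observe inside onServiceConnected which kind of IBinder was delivered:

    @Override
    public void onServiceConnected(ComponentName className, IBinder service) {
        // Same process: 'service' is the very object Service.onBind() returned,
        // so queryLocalInterface() finds it. Cross process: 'service' is a
        // BinderProxy, whose queryLocalInterface() always returns null (shown later).
        boolean isLocal = service.queryLocalInterface("com.android.samll.aidl.IRemoteService") != null;
        android.util.Log.d("AIDL", "binder is local? " + isLocal);
        mService = IRemoteService.Stub.asInterface(service);
    }

The asInterface logic that makes this distinction lives in the generated Stub: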

public static abstract class Stub extends android.os.Binder implements com.android.samll.aidl.IRemoteService {
       private static final java.lang.String DESCRIPTOR = "com.android.samll.aidl.IRemoteService";
       public Stub() {
           this.attachInterface(this, DESCRIPTOR);
       }

       public static com.android.samll.aidl.IRemoteService asInterface(android.os.IBinder obj) {
           if ((obj==null)) {
                return null;
           }
           android.os.IInterface iin = obj.queryLocalInterface(DESCRIPTOR);
           if (((iin!=null)&&(iin instanceof com.android.samll.aidl.IRemoteService))) {
                 return ((com.android.samll.aidl.IRemoteService)iin);
           }
           return new com.android.samll.aidl.IRemoteService.Stub.Proxy(obj);
       }
       ......
}

From the code above we can see that when both sides are in the same process, the object passed to asInterface is exactly what Service.onBind() returned, i.e. the Stub object.

The attachInterface called in the Stub constructor:

public void attachInterface(IInterface owner, String descriptor) {
        mOwner = owner;
        mDescriptor = descriptor;
}

The obj.queryLocalInterface(DESCRIPTOR) call in the asInterface method:

public IInterface queryLocalInterface(String descriptor) {
        if (mDescriptor.equals(descriptor)) {
            return mOwner;
        }
        return null;
}

So when both sides are in the same process, the call does not go through Binder IPC at all. Note also that this method is what allows asInterface to tell whether the communication is in-process or cross-process.
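
A minimal sketch of the same-process path (assuming, as above, that registerCallback is the interface's only method; nothing here touches the Binder driver):

    // The Stub constructor called attachInterface(this, DESCRIPTOR), so
    // queryLocalInterface(DESCRIPTOR) returns the Stub itself and asInterface()
    // simply casts it -- no Proxy, no Parcel, no IPC.
    IRemoteService.Stub stub = new IRemoteService.Stub() {
        @Override
        public void registerCallback(IRemoteServiceCallback cb) {
            // an ordinary in-process Java call
        }
    };
    IRemoteService itf = IRemoteService.Stub.asInterface(stub);
    // itf == stub evaluates to true here.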

Next, let's focus on the cross-process case.

new com.android.samll.aidl.IRemoteService.Stub.Proxy(obj):

private static class Proxy implements com.android.samll.aidl.IRemoteService {
         private android.os.IBinder mRemote;

         Proxy(android.os.IBinder remote) {
             mRemote = remote;
         }
         ......
}

Here all we need to know is that the remote parameter is a BinderProxy instance.
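
The binding code shown next registers a callback with the service. As a hedged sketch (the valueChanged method is an assumption modeled on the SDK's RemoteService sample, not taken from this article's code), the client-side mCallback might look like this:

    private final IRemoteServiceCallback mCallback = new IRemoteServiceCallback.Stub() {
        @Override
        public void valueChanged(int value) {
            // runs on a Binder thread in the client process when the service calls back
        }
    };

With that in place, the binding code looks like this: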

private ServiceConnection mConnection = new ServiceConnection() {

    @Override
    public void onServiceConnected(ComponentName className, IBinder service) {
        mService = IRemoteService.Stub.asInterface(service);
        try {
            mService.registerCallback(mCallback);
        } catch (RemoteException e) {
                e.printStackTrace();
        }
        ......
    }

    @Override
    public void onServiceDisconnected(ComponentName className) {
        mService = null;
        ......
    }
};

Based on the analysis above, mService is an instance of Proxy; let's look at its registerCallback method:

private static class Proxy implements com.android.samll.aidl.IRemoteService {
    private android.os.IBinder mRemote;
    Proxy(android.os.IBinder remote) {
        mRemote = remote;
    }

    ......

    @Override 
    public void registerCallback(com.android.samll.aidl.IRemoteServiceCallback cb) throws android.os.RemoteException
    {
        android.os.Parcel _data = android.os.Parcel.obtain();
        android.os.Parcel _reply = android.os.Parcel.obtain();
        try {
                _data.writeInterfaceToken(DESCRIPTOR);
                _data.writeStrongBinder((((cb!=null))?(cb.asBinder()):(null)));
                mRemote.transact(Stub.TRANSACTION_registerCallback, _data, _reply, 0);
                _reply.readException();
        }
        finally {
            _reply.recycle();
            _data.recycle();
        }
    }
    ......
}

Now focus on the transact method; note that at this point it is BinderProxy's transact method:

final class BinderProxy implements IBinder {
    public native boolean pingBinder();
    public native boolean isBinderAlive();

    public IInterface queryLocalInterface(String descriptor) {
        return null;
    }

    public boolean transact(int code, Parcel data, Parcel reply, int flags) throws RemoteException {
        Binder.checkParcel(this, code, data, "Unreasonably large binder buffer");
        if (Binder.isTracingEnabled()) { Binder.getTransactionTracker().addTrace(); }
        return transactNative(code, data, reply, flags);
    }

    public native boolean transactNative(int code, Parcel data, Parcel reply, int flags) throws RemoteException;
    ......
}

The following draws on 彻底理解ANDROID BINDER通信架构(上) and Android5.0中Binder相关的ProcessState和IPCThreadState的认识.

In the source analysis below we trace the registerCallback call from our example, so code is Stub.TRANSACTION_registerCallback.

The native transactNative method is implemented in android_util_Binder.cpp; based on the JNI method registration, it corresponds to the following function:

static jboolean android_os_BinderProxy_transact(JNIEnv* env, jobject obj,
        jint code, jobject dataObj, jobject replyObj, jint flags) {

    if (dataObj == NULL) {
        jniThrowNullPointerException(env, NULL);
        return JNI_FALSE;
    }
    // Convert the Java Parcel to a native (C++) Parcel
    Parcel* data = parcelForJavaObject(env, dataObj);
    if (data == NULL) {
        return JNI_FALSE;
    }
    Parcel* reply = parcelForJavaObject(env, replyObj);
    if (reply == NULL && replyObj != NULL) {
        return JNI_FALSE;
    }
    // gBinderProxyOffsets.mObject holds the native BpBinder object
    IBinder* target = (IBinder*) env->GetIntField(obj, gBinderProxyOffsets.mObject);
    if (target == NULL) {
        jniThrowException(env, "java/lang/IllegalStateException", "Binder has been finalized!");
        return JNI_FALSE;
    }
    const bool time_binder_calls = should_time_binder_calls();
    int64_t start_millis;
    if (time_binder_calls) {
        start_millis = uptimeMillis();
    }
    // Call BpBinder's transact method
    status_t err = target->transact(code, *data, reply, flags);
    if (time_binder_calls) {
        conditionally_log_binder_call(start_millis, target, code);
    }
    if (err == NO_ERROR) {
        return JNI_TRUE;
    } else if (err == UNKNOWN_TRANSACTION) {
        return JNI_FALSE;
    }
    // Throw different exceptions depending on how transact fared
    signalExceptionForError(env, obj, err, true);
    return JNI_FALSE;
}

BpBinder.transact ------> BpBinder.cpp

code is Stub.TRANSACTION_registerCallback.

status_t BpBinder::transact(uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }
    return DEAD_OBJECT;
}

Next, look at IPCThreadState's transact method ------> IPCThreadState.cpp

code is Stub.TRANSACTION_registerCallback.

status_t IPCThreadState::transact(int32_t handle,uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags) {
    // Check the data for errors
    status_t err = data.errorCheck();
    flags |= TF_ACCEPT_FDS;
    ......
    if (err == NO_ERROR) {
        ......
        // First write the transaction data (into mOut)
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }
    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
        return (mLastError = err);
    }
    // By default the call is not oneway, i.e. we must wait for the server's reply
    if ((flags & TF_ONE_WAY) == 0) {
        if (reply) {
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
        ......
    } else {
        err = waitForResponse(NULL, NULL);
    }
    return err;
}

writeTransactionData ------> IPCThreadState.cpp

code is Stub.TRANSACTION_registerCallback.

status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer) {
    binder_transaction_data tr;
    tr.target.handle = handle;
    tr.code = code;
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;
    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount()*sizeof(size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) {
        tr.flags |= TF_STATUS_CODE;
        *statusBuffer = err;
        tr.data_size = sizeof(status_t);
        tr.data.ptr.buffer = statusBuffer;
        tr.offsets_size = 0;
        tr.data.ptr.offsets = NULL;
    } else {
        return (mLastError = err);
    }
    // Data destined for the driver is ultimately stored in mOut
    // The cmd, BC_TRANSACTION, is written directly into mOut
    mOut.writeInt32(cmd);
    mOut.write(&tr, sizeof(tr));
    return NO_ERROR;
}

The data is written into mOut; at this point mIn has no data yet. The binder_transaction_data struct carries the outgoing data; when the Binder driver replies, the returned data uses a similar structure.

Then waitForResponse() is executed: when oneway is set, waitForResponse(NULL, NULL) is called; when oneway is not set, waitForResponse(reply) or waitForResponse(&fakeReply) is called.
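
As a hedged illustration (the oneway declaration is hypothetical and not part of the sample interface), this is roughly what the generated Proxy body would look like if registerCallback were declared oneway in the .aidl file:

    // Hypothetical declaration: oneway void registerCallback(IRemoteServiceCallback cb);
    android.os.Parcel _data = android.os.Parcel.obtain();
    try {
        _data.writeInterfaceToken(DESCRIPTOR);
        _data.writeStrongBinder(cb != null ? cb.asBinder() : null);
        // No reply Parcel and FLAG_ONEWAY: IPCThreadState takes the
        // waitForResponse(NULL, NULL) branch and does not block for a result.
        mRemote.transact(Stub.TRANSACTION_registerCallback, _data, null, android.os.IBinder.FLAG_ONEWAY);
    } finally {
        _data.recycle();
    }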

waitForResponse -----> IPCThreadState.cpp

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult) {
    int32_t cmd;
    int32_t err;
    while (1) {
        // Interact with the Binder driver
        if ((err=talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;
        // After returning from the driver, mIn has data to read
        // Handle the received reply according to its command
        cmd = mIn.readInt32();
        IF_LOG_COMMANDS() {
            alog << "Processing waitForResponse Command: "
                << getReturnString(cmd) << endl;
        }
        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;
        case BR_DEAD_REPLY:
            err = DEAD_OBJECT;
            goto finish;

        case BR_FAILED_REPLY:
            err = FAILED_TRANSACTION;
            goto finish;
        case BR_ACQUIRE_RESULT:
            {
                ALOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");
                const int32_t result = mIn.readInt32();
                if (!acquireResult) continue;
                *acquireResult = result ? NO_ERROR : INVALID_OPERATION;
            }
            goto finish;
        case BR_REPLY:
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;

                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(size_t),
                            freeBuffer, this);
                    } else {
                        err = *static_cast<const status_t*>(tr.data.ptr.buffer);
                        freeBuffer(NULL,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(size_t), this);
                    }
                } else {
                    freeBuffer(NULL,
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size/sizeof(size_t), this);
                    continue;
                }
            }
            goto finish;
        default:
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }
finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }
    return err;
}

The loop runs until a reply is received. talkWithDriver() interacts with the driver; once a reply arrives it is written into mIn, and the appropriate action is taken according to the response code received.

The talkWithDriver function is not covered in detail here; inside it, the ioctl system call manages the I/O channel between the process and the Binder driver. Let's continue with the executeCommand method. Note that on the service side executeCommand is not reached through the default branch above, but via joinThreadPool.

void IPCThreadState::joinThreadPool(bool isMain) {
    mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);
    set_sched_policy(mMyThreadId, SP_FOREGROUND); 
    status_t result;
    do {
        processPendingDerefs();
        result = getAndExecuteCommand();
        if (result < NO_ERROR && result != TIMED_OUT && result != -ECONNREFUSED && result != -EBADF) {
            abort();
        }
        if(result == TIMED_OUT && !isMain) {
            break;
        }
    } while (result != -ECONNREFUSED && result != -EBADF); 
    mOut.writeInt32(BC_EXIT_LOOPER);
    talkWithDriver(false);
}

Note that this loop keeps running (it exits only on -ECONNREFUSED or -EBADF), so the Binder thread stays in it waiting for incoming commands.

status_t IPCThreadState::getAndExecuteCommand()
{
    status_t result;
    int32_t cmd;
    result = talkWithDriver();
    if (result >= NO_ERROR) {
        size_t IN = mIn.dataAvail();
        if (IN < sizeof(int32_t)) return result;
        cmd = mIn.readInt32();
        IF_LOG_COMMANDS() {
            alog << "Processing top-level Command: "
                 << getReturnString(cmd) << endl;
        }
        result = executeCommand(cmd);
        set_sched_policy(mMyThreadId, SP_FOREGROUND);
    }
    return result;
}

Look at the BR_TRANSACTION branch; the relevant code is extracted below.

status_t IPCThreadState::executeCommand(int32_t cmd)
{
    BBinder* obj;
    RefBase::weakref_type* refs;
    status_t result = NO_ERROR;
    switch (cmd) {
    ......
    case BR_TRANSACTION: {
            binder_transaction_data tr;
            // Read the incoming transaction data
            result = mIn.read(&tr, sizeof(tr));
            ALOG_ASSERT(result == NO_ERROR,
                "Not enough command data for brTRANSACTION");
            if (result != NO_ERROR) break;
            Parcel buffer;
            buffer.ipcSetDataReference(
                reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                tr.data_size,
                reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                tr.offsets_size/sizeof(size_t), freeBuffer, this);
            const pid_t origPid = mCallingPid;
            const uid_t origUid = mCallingUid;
            mCallingPid = tr.sender_pid;
            mCallingUid = tr.sender_euid;
            int curPrio = getpriority(PRIO_PROCESS, mMyThreadId);
            if (gDisableBackgroundScheduling) {
                if (curPrio > ANDROID_PRIORITY_NORMAL) {
                    setpriority(PRIO_PROCESS, mMyThreadId, ANDROID_PRIORITY_NORMAL);
                }
            } else {
                if (curPrio >= ANDROID_PRIORITY_BACKGROUND) {
                    set_sched_policy(mMyThreadId, SP_BACKGROUND);
                }
            }
            Parcel reply;
            IF_LOG_TRANSACTIONS() {
                TextOutput::Bundle _b(alog);
            }
            if (tr.target.ptr) {
                sp<BBinder> b((BBinder*)tr.cookie);
                // Note that the first argument of transact is the Stub.TRANSACTION_registerCallback we passed in
                // tr.cookie holds a BBinder subclass, JavaBBinder
                const status_t error = b->transact(tr.code, buffer, &reply, tr.flags);
                if (error < NO_ERROR) reply.setError(error);

            } else {
                const status_t error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
                if (error < NO_ERROR) reply.setError(error);
            }

            if ((tr.flags & TF_ONE_WAY) == 0) {
                LOG_ONEWAY("Sending reply to %d!", mCallingPid);
                sendReply(reply, 0);
            } else {
                LOG_ONEWAY("NOT sending reply to %d!", mCallingPid);
            }

            mCallingPid = origPid;
            mCallingUid = origUid;

            IF_LOG_TRANSACTIONS() {
                TextOutput::Bundle _b(alog);
            }  
        }
        break;
        ......
    }
    if (result != NO_ERROR) {
        mLastError = result;
    }
    return result;
}

transact --------> Binder.cpp

status_t BBinder::transact(uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags) {

    data.setDataPosition(0);
    status_t err = NO_ERROR;
    switch (code) {
        case PING_TRANSACTION:
            reply->writeInt32(pingBinder());
            break;
        default:
            // Codes defined by the client's own interface cannot be enumerated here, so they are handled in the default branch
            err = onTransact(code, data, reply, flags);
            break;
    }
    if (reply != NULL) {
        reply->setDataPosition(0);
    }
    return err;
}

Control then reaches the subclass's onTransact function. After IPCThreadState receives the Client's request, it calls BBinder's transact function with the relevant parameters; BBinder's transact ultimately calls the remote service class's onTransact, and the Client's request finally starts to be processed for real.

JavaBBinder.onTransact -------> android_util_Binder.cpp

virtual status_t onTransact(uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags = 0) {
        JNIEnv* env = javavm_to_jnienv(mVM);
        IPCThreadState* thread_state = IPCThreadState::self();
        const int strict_policy_before = thread_state->getStrictModePolicy();
        thread_state->setLastTransactionBinderFlags(flags); 
        // gBinderOffsets.mExecTransact is the Java-level execTransact() method
        // Its method ID was cached in mExecTransact when the Binder class was registered with JNI
        jboolean res = env->CallBooleanMethod(mObject, gBinderOffsets.mExecTransact,
            code, (int32_t)&data, (int32_t)reply, flags);
        jthrowable excep = env->ExceptionOccurred();
        const int strict_policy_after = thread_state->getStrictModePolicy();
        if (strict_policy_after != strict_policy_before) {
            thread_state->setStrictModePolicy(strict_policy_before);
            set_dalvik_blockguard_policy(env, strict_policy_before);
        }
        jthrowable excep2 = env->ExceptionOccurred();
        if (code == SYSPROPS_TRANSACTION) {
            BBinder::onTransact(code, data, reply, flags);
        }
        return res != JNI_FALSE ? NO_ERROR : UNKNOWN_TRANSACTION;
}

CallBooleanMethod actually invokes the execTransact method of the Binder class; see the comments above for details. Also note that mObject is the instance returned by Service.onBind(); if in doubt, see the article on JavaBBinder initialization (JavaBBinder初始化).

Binder.execTransact -------> Binder.java

// Per the original source comment, this method is the entry point for onTransact calls coming from native code
private boolean execTransact(int code, long dataObj, long replyObj, int flags) {
        Parcel data = Parcel.obtain(dataObj);
        Parcel reply = Parcel.obtain(replyObj);
        boolean res;
        try {
            res = onTransact(code, data, reply, flags);
        } catch (RemoteException|RuntimeException e) {
            ......
            res = true;
        } catch (OutOfMemoryError e) {
            RuntimeException re = new RuntimeException("Out of memory", e);
            reply.setDataPosition(0);
            reply.writeException(re);
            res = true;
        }
        checkParcel(this, code, reply, "Unreasonably large binder reply buffer");
        reply.recycle();
        data.recycle();
        StrictMode.clearGatheredViolations();
        return res;
    }

onTransact then invokes the subclass's implementation, i.e. the Stub instance returned by Service.onBind(), which handles the logic for the corresponding code.
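
For completeness, this is roughly what the generated Stub.onTransact looks like for our example (reconstructed by hand; the sample's generated file may differ slightly):

    @Override
    public boolean onTransact(int code, android.os.Parcel data, android.os.Parcel reply, int flags)
            throws android.os.RemoteException {
        switch (code) {
            case INTERFACE_TRANSACTION: {
                reply.writeString(DESCRIPTOR);
                return true;
            }
            case TRANSACTION_registerCallback: {
                // Verify the interface token written by the Proxy via writeInterfaceToken().
                data.enforceInterface(DESCRIPTOR);
                // readStrongBinder() returns the client's callback binder; asInterface()
                // wraps it in a Proxy on the service side (the roles are now reversed).
                com.android.samll.aidl.IRemoteServiceCallback _arg0 =
                        com.android.samll.aidl.IRemoteServiceCallback.Stub.asInterface(data.readStrongBinder());
                // Dispatch to the concrete implementation supplied by the service.
                this.registerCallback(_arg0);
                reply.writeNoException();
                return true;
            }
        }
        return super.onTransact(code, data, reply, flags);
    }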
