A Brief Analysis of the Android Binder Mechanism (Part 3)
Continued from the previous part...
This article consolidates existing material from around the web together with my own understanding; corrections are welcome.
3. How MediaService Runs
From the analysis in 2.6 we know that defaultServiceManager returns a BpServiceManager;
then, once MediaPlayerService is instantiated, BpServiceManager's addService function is called.
During that process, service_manager receives the addService request and records the corresponding information in the list of services it maintains.
Up to this point we have seen that service_manager has a binder_loop function (covered in the latter part of 2.8) that does nothing but wait on the binder device for incoming requests.
Likewise, since we created MediaPlayerService, i.e. BnMediaPlayerService, it should have the following capabilities:
1. Open the binder device
2. Run a looper of its own and sit waiting for requests
However, the MediaPlayerService constructor contains no explicit opening of the binder device, so we look at its parent classes, i.e. the work done by BnXXX.
3.1 How MediaPlayerService opens binder
Location: frameworks/av/media/libmediaplayerservice/MediaPlayerService.h
There we can see that MediaPlayerService derives from BnMediaPlayerService, which in turn derives from BnInterface<IMediaPlayerService>; so the next things to chase are BnMediaPlayerService and BnInterface.
Location: frameworks/native/include/binder/IInterface.h
template<typename INTERFACE>
class BnInterface : public INTERFACE, public BBinder
{
public:
virtual sp<IInterface> queryLocalInterface(const String16& _descriptor);
virtual const String16& getInterfaceDescriptor() const;
protected:
virtual IBinder* onAsBinder();
};
Substituting the template parameter (INTERFACE = IMediaPlayerService) gives:
class BnInterface : public IMediaPlayerService, public BBinder
{
...
}
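For reference, here is a condensed look at the Bn-side declaration in frameworks/av/media/libmedia/IMediaPlayerService.h (exact signatures vary by Android version): BnMediaPlayerService itself adds little more than an onTransact override on top of that instantiation, while the service methods like create() come in as pure virtuals from the IMediaPlayerService side.
class BnMediaPlayerService : public BnInterface<IMediaPlayerService>
{
public:
    virtual status_t onTransact(uint32_t code,
                                const Parcel& data,
                                Parcel* reply,
                                uint32_t flags = 0);
};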
So what is BBinder? Is it the counterpart of BpBinder?
BBinder::BBinder() : mExtras(nullptr)
{
    // BBinder is to BnXXX what BpBinder is to BpXXX,
    // yet nothing in here opens the binder device
}
But every Service does have a corresponding binder device fd.
...
Going back to the very beginning of main_mediaserver: the binder device was already opened there, inside ProcessState.
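As a reminder of chapter 2, here is a condensed sketch of where that per-process fd comes from (simplified from frameworks/native/libs/binder/ProcessState.cpp; member and helper names follow AOSP, error handling omitted, details vary by version):
sp<ProcessState> ProcessState::self()
{
    Mutex::Autolock _l(gProcessMutex);
    if (gProcess == nullptr) {
        // the first caller in the process creates the singleton
        gProcess = new ProcessState("/dev/binder");
    }
    return gProcess;
}

ProcessState::ProcessState(const char* driver)
    : mDriverFD(open_driver(driver)) // open("/dev/binder", O_RDWR | O_CLOEXEC)
                                     // plus BINDER_VERSION / BINDER_SET_MAX_THREADS ioctls
{
    if (mDriverFD >= 0) {
        // map the driver's transaction buffer into our address space;
        // incoming transaction data lands here
        mVMStart = mmap(nullptr, BINDER_VM_SIZE, PROT_READ,
                        MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
    }
}
So whichever thread calls ProcessState::self() first opens the device, once, for the whole process.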
3.2 The looper
Opening the binder device is a per-process affair; opening it once per process is enough.
Back at the start of main in chapter 2, there was already this message-loop (looper) setup:
>> ProcessState::self()->startThreadPool();
>> IPCThreadState::self()->joinThreadPool();
First, look at startThreadPool:
void ProcessState::startThreadPool()
{
    AutoMutex _l(mLock);
    if (!mThreadPoolStarted) {
        mThreadPoolStarted = true;
        spawnPooledThread(true);
    }
}
void ProcessState::spawnPooledThread(bool isMain)
{
    if (mThreadPoolStarted) {
        String8 name = makeBinderThreadName();
        sp<Thread> t = new PoolThread(isMain);
        // isMain is true here: create the pool thread and get it running
        t->run(name.string());
    }
}
// PoolThread derives from the Thread class
class PoolThread : public Thread
{
public:
explicit PoolThread(bool isMain)
: mIsMain(isMain)
{
}
...
Location: system/core/libutils/Threads.cpp
Thread::Thread(bool canCallJava)
: mCanCallJava(canCallJava),
mThread(thread_id_t(-1)),
mLock("Thread::mLock"),
mStatus(NO_ERROR),
mExitPending(false), mRunning(false)
{
}
// No thread has been created at this point. PoolThread::run is then called, which actually runs the base class's run:
status_t Thread::run(const char* name, int32_t priority, size_t stack)
{
LOG_ALWAYS_FATAL_IF(name == nullptr, "thread name not provided to Thread::run");
Mutex::Autolock _l(mLock);
...
mStatus = NO_ERROR;
mExitPending = false;
mThread = thread_id_t(-1);
...
bool res;
if (mCanCallJava) {
>> res = createThreadEtc(_threadLoop,
this, name, priority, stack, &mThread);
} else {
res = androidCreateRawThreadEtc(_threadLoop,
this, name, priority, stack, &mThread);
}
...
// At last, a thread is created inside run. Meanwhile, the main thread itself goes on to execute:
IPCThreadState::self()->joinThreadPool();
But first, chase _threadLoop:
int Thread::_threadLoop(void* user)
{
    Thread* const self = static_cast<Thread*>(user);
    sp<Thread> strong(self->mHoldSelf);
    wp<Thread> weak(strong);
    self->mHoldSelf.clear();
#if defined(__ANDROID__)
    // this is very useful for debugging with gdb
    self->mTid = gettid();
#endif
    bool first = true;
    do {
        bool result;
        if (first) {
            first = false;
            self->mStatus = self->readyToRun();
            result = (self->mStatus == NO_ERROR);
            if (result && !self->exitPending()) {
                // Binder threads (and maybe others) rely on threadLoop
                // running at least once after a successful ::readyToRun()
                // (unless, of course, the thread has already been asked to exit
                // at that point).
                // This is because threads are essentially used like this:
                // (new ThreadSubclass())->run();
                // The caller therefore does not retain a strong reference to
                // the thread and the thread would simply disappear after the
                // successful ::readyToRun() call instead of entering the
                // threadLoop at least once.
                result = self->threadLoop();
                // call the subclass's own threadLoop
            }
        } else {
            result = self->threadLoop();
        }
        ...
    } while (strong != 0);
    return 0;
}
The PoolThread object we created therefore has its threadLoop function called:
// This runs on the new thread, so a new IPCThreadState object is necessarily created (it lives in thread-local storage, TLS)
virtual bool threadLoop()
{
IPCThreadState::self()->joinThreadPool(mIsMain);
return false;
}
const bool mIsMain;
};
// Both the main thread and the worker thread call joinThreadPool, so chase it next
void IPCThreadState::joinThreadPool(bool isMain)
{
LOG_THREADPOOL("**** THREAD %p (PID %d) IS JOINING THE THREAD POOL\n", (void*)pthread_self(), getpid());
>> mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);
status_t result;
do {
processPendingDerefs();
// now get the next command to be processed, waiting if necessary
result = getAndExecuteCommand();
...
} while (result != -ECONNREFUSED && result != -EBADF);
LOG_THREADPOOL("**** THREAD %p (PID %d) IS LEAVING THE THREAD POOL err=%d\n",
(void*)pthread_self(), getpid(), result);
>> mOut.writeInt32(BC_EXIT_LOOPER);
talkWithDriver(false);
}
Here at last is the loop; but it seems both threads execute it? (We come back to why at the end of this chapter.)
First, look at getAndExecuteCommand:
status_t IPCThreadState::getAndExecuteCommand()
{
status_t result;
int32_t cmd;
result = talkWithDriver();
if (result >= NO_ERROR) {
size_t IN = mIn.dataAvail();
if (IN < sizeof(int32_t)) return result;
>> cmd = mIn.readInt32();
IF_LOG_COMMANDS() {
alog << "Processing top-level Command: "
<< getReturnString(cmd) << endl;
}
pthread_mutex_lock(&mProcess->mThreadCountLock);
mProcess->mExecutingThreadsCount++;
if (mProcess->mExecutingThreadsCount >= mProcess->mMaxThreads &&
mProcess->mStarvationStartTimeMs == 0) {
mProcess->mStarvationStartTimeMs = uptimeMillis();
}
pthread_mutex_unlock(&mProcess->mThreadCountLock);
>> result = executeCommand(cmd);
pthread_mutex_lock(&mProcess->mThreadCountLock);
mProcess->mExecutingThreadsCount--;
if (mProcess->mExecutingThreadsCount < mProcess->mMaxThreads &&
mProcess->mStarvationStartTimeMs != 0) {
int64_t starvationTimeMs = uptimeMillis() - mProcess->mStarvationStartTimeMs;
if (starvationTimeMs > 100) {
ALOGE("binder thread pool (%zu threads) starved for %" PRId64 " ms",
mProcess->mMaxThreads, starvationTimeMs);
}
mProcess->mStarvationStartTimeMs = 0;
}
pthread_cond_broadcast(&mProcess->mThreadCountDecrement);
pthread_mutex_unlock(&mProcess->mThreadCountLock);
}
return result;
}
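Before moving on: talkWithDriver, called at the top of getAndExecuteCommand, is where the thread actually blocks. Here is a heavily condensed sketch of its essential path (the real function in IPCThreadState.cpp also loops on EINTR and handles partial writes and logging):
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    binder_write_read bwr;
    // commands we queued (BC_*) go down in one buffer...
    bwr.write_size     = mOut.dataSize();
    bwr.write_buffer   = (uintptr_t)mOut.data();
    bwr.write_consumed = 0;
    // ...and returns from the driver (BR_*) come back in the other
    bwr.read_size      = doReceive ? mIn.dataCapacity() : 0;
    bwr.read_buffer    = doReceive ? (uintptr_t)mIn.data() : 0;
    bwr.read_consumed  = 0;
    // a single syscall carries both directions; if there is nothing to
    // read, the thread may block here inside the driver
    if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0) {
        mOut.setDataSize(0);                 // everything written was consumed
        mIn.setDataSize(bwr.read_consumed);  // expose what the driver handed us
        mIn.setDataPosition(0);
        return NO_ERROR;
    }
    return -errno;
}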
getAndExecuteCommand calls executeCommand, so keep chasing:
status_t IPCThreadState::executeCommand(int32_t cmd)
{
    BBinder* obj;
    RefBase::weakref_type* refs;
    status_t result = NO_ERROR;
    switch ((uint32_t)cmd) {
    ...
    case BR_TRANSACTION:
        {
            binder_transaction_data tr;
            result = mIn.read(&tr, sizeof(tr));
            // a command has arrived and parses as BR_TRANSACTION; read the transaction data that follows
            ...
            if (tr.target.ptr) {
                // we only hold a weak reference on the target object, so we must
                // first safely acquire a strong reference before using it
                if (reinterpret_cast<RefBase::weakref_type*>(
                        tr.target.ptr)->attemptIncStrong(this)) {
                    // tr.cookie is the BBinder itself
                    error = reinterpret_cast<BBinder*>(tr.cookie)->transact(tr.code, buffer,
                            &reply, tr.flags);
                    reinterpret_cast<BBinder*>(tr.cookie)->decStrong(this);
                } else {
                    error = UNKNOWN_TRANSACTION;
                }
            } else {
                error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
            }
        }
        break;
    ...
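One more case in the same switch is worth noting, because it answers how the thread pool grows: when the driver decides the process needs another binder thread, it sends BR_SPAWN_LOOPER, and executeCommand reacts with:
    case BR_SPAWN_LOOPER:
        // isMain == false: the new thread registers with BC_REGISTER_LOOPER
        // and, unlike the main looper, may time out and exit when idle
        mProcess->spawnPooledThread(false);
        break;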
Back on the BR_TRANSACTION path, chase into BBinder.
Location: frameworks/native/libs/binder/Binder.cpp
status_t BBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    data.setDataPosition(0);
    status_t err = NO_ERROR;
    switch (code) {
        ...
        default:
            // i.e. the default case simply calls its own onTransact function
            err = onTransact(code, data, reply, flags);
            break;
    }
    ...
    return err;
}
Because onTransact is virtual and BnMediaPlayerService derives from BBinder (overriding it), this call lands in BnMediaPlayerService::onTransact; codes it does not handle fall through to the base implementation.
Finally, look at that base, BBinder::onTransact:
status_t BBinder::onTransact(
uint32_t code, const Parcel& data, Parcel* reply, uint32_t /*flags*/)
{
    // As noted at the start of 3.1, BnMediaPlayerService derives from both BBinder and IMediaPlayerService,
    // and every function IMediaPlayerService offers is distinguished by a transaction code;
    // generic codes like the one below are handled here in the base class
switch (code) {
...
case SHELL_COMMAND_TRANSACTION: {
int in = data.readFileDescriptor();
int out = data.readFileDescriptor();
int err = data.readFileDescriptor();
int argc = data.readInt32();
Vector<String16> args;
for (int i = 0; i < argc && data.dataAvail() > 0; i++) {
args.add(data.readString16());
}
sp<IShellCallback> shellCallback = IShellCallback::asInterface(
data.readStrongBinder());
sp<IResultReceiver> resultReceiver = IResultReceiver::asInterface(
data.readStrongBinder());
...
if (resultReceiver != NULL) {
resultReceiver->send(INVALID_OPERATION);
}
}
...
default:
return UNKNOWN_TRANSACTION;
}
}
A short recap: BnXXX's onTransact receives the command and dispatches to the derived class's functions, which do the actual work.
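To make that dispatch concrete, here is a condensed sketch of BnMediaPlayerService::onTransact from frameworks/av/media/libmedia/IMediaPlayerService.cpp, showing a single code, CREATE (the real switch covers every IMediaPlayerService method, and exact signatures vary by Android version):
status_t BnMediaPlayerService::onTransact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    switch (code) {
        case CREATE: {
            CHECK_INTERFACE(IMediaPlayerService, data, reply);
            // unpack the arguments the proxy marshalled...
            sp<IMediaPlayerClient> client =
                interface_cast<IMediaPlayerClient>(data.readStrongBinder());
            audio_session_t audioSessionId = (audio_session_t) data.readInt32();
            // ...call the real implementation in MediaPlayerService...
            sp<IMediaPlayer> player = create(client, audioSessionId);
            // ...and marshal the result back for the caller
            reply->writeStrongBinder(IInterface::asBinder(player));
            return NO_ERROR;
        } break;
        ...
        default:
            // unknown codes fall back to the generic BBinder handler shown above
            return BBinder::onTransact(code, data, reply, flags);
    }
}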
One special point remains: after startThreadPool and joinThreadPool there really are two threads, the main thread and a worker thread, both running the message loop, and both end up with isMain == true. Why do it this way? It is stock Google behavior; the likely reason is that a single loop could fall behind under load, so two threads share the incoming commands. That explanation seems plausible.
4. MediaPlayerClient
How does MediaPlayerClient interact with MediaPlayerService? Before MediaPlayerService can be used, a BpMediaPlayerService must be created.
Location: frameworks/av/media/libmedia/IMediaDeathNotifier.cpp
>> const sp<IMediaPlayerService> IMediaDeathNotifier::getMediaPlayerService()
{
if (sMediaPlayerService == 0) {
sp<IServiceManager> sm = defaultServiceManager();
sp<IBinder> binder;
        // ask ServiceManager for the service's information; the result comes back in binder
do {
binder = sm->getService(String16("media.player"));
if (binder != 0) {
break;
}
ALOGW("Media player service not published, waiting...");
usleep(500000); // 0.5 s
} while (true);
...
binder->linkToDeath(sDeathNotifier);
        // interface_cast converts this binder into a BpMediaPlayerService.
        // The binder is only there to talk to the binder device; by itself it has
        // nothing to do with IMediaPlayerService's functionality.
sMediaPlayerService = interface_cast<IMediaPlayerService>(binder);
}
ALOGE_IF(sMediaPlayerService == 0, "no media player service!?");
return sMediaPlayerService;
}
// This is essentially the Bridge pattern: BpMediaPlayerService uses this binder to communicate with BnMediaPlayerService
Binder is really just an interface for talking to the binder device; the upper-layer IMediaPlayerService merely uses it the way one would use a socket. The binder object and the upper-layer class IMediaPlayerService are easy to confuse, but they are separate concerns.
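Two pieces make the bridge concrete. First, interface_cast is nothing but a thin template wrapper (this is the actual definition in IInterface.h):
template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}
Second, every BpMediaPlayerService method is just a Parcel round trip through the held binder. A condensed sketch of the proxy-side create, mirroring the Bn-side CREATE case shown in chapter 3 (signatures again version-dependent):
virtual sp<IMediaPlayer> create(const sp<IMediaPlayerClient>& client,
                                audio_session_t audioSessionId)
{
    Parcel data, reply;
    // the interface token lets the Bn side CHECK_INTERFACE the call
    data.writeInterfaceToken(IMediaPlayerService::getInterfaceDescriptor());
    data.writeStrongBinder(IInterface::asBinder(client));
    data.writeInt32(audioSessionId);
    // remote() is the BpBinder wrapping the handle obtained from getService
    remote()->transact(CREATE, data, &reply);
    return interface_cast<IMediaPlayer>(reply.readStrongBinder());
}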
Note: the native-layer picture (supplementary explanation)
getMediaPlayerService is C++-layer code:
int main()
{
    getMediaPlayerService();
    // Can calling this directly get you a BpMediaPlayerService? No. Why not?
    // Because the binder driver has not been opened yet! In a Java app,
    // however, Google has already wrapped all of this for you.
    // Pure native code must therefore do it like this:
    sp<ProcessState> proc(ProcessState::self());
    // This line is not strictly required: so many places need ProcessState
    // that it gets created automatically anyway.
    getMediaPlayerService();
    // A message loop must also be started; otherwise, if the Bn side sends
    // you a notification, how would you ever receive it?
    ProcessState::self()->startThreadPool();
    // Whether the main thread also joins the message loop is up to you. Usually
    // it waits for input from other sources, e.g. commands arriving over a
    // socket, and uses them to drive MediaPlayerService.
}
5. Summary
That concludes our analysis of Binder. After reading it, you should be able to do the following:
>> If you write your own Service, you need to know how the system ends up calling your functions. Correct: two threads sit there continuously pulling commands from the binder device and invoking your functions, so this is very much a multithreading situation.
>> If you chase a bug, you need to know how a function called on the Client side finally reaches the remote Service. That way, once you have traced a call to its end on the Client side, you know to continue at the corresponding function in the Service. The whole thing is synchronous: a Client call blocks until the Service returns.
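Putting the pieces together, a minimal native service skeleton would look like this. This is an illustrative sketch only: BnHelloService and the "hello" name are hypothetical, but the startup sequence is exactly the one mediaserver uses.
int main()
{
    // open /dev/binder and mmap it (once per process)
    sp<ProcessState> proc(ProcessState::self());
    // register our Bn-side object with service_manager
    sp<IServiceManager> sm = defaultServiceManager();
    sm->addService(String16("hello"), new BnHelloService()); // hypothetical service
    // spawn the worker looper thread...
    ProcessState::self()->startThreadPool();
    // ...and let the main thread join the message loop as well
    IPCThreadState::self()->joinThreadPool();
    return 0;
}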