Android ServiceManager Startup: Everything You Need to Understand the ServiceManager Startup Flow
Based on the Android 6.0 source code, this article walks through the ServiceManager startup flow in detail.
framework/native/cmds/servicemanager/
- service_manager.c
- binder.c
kernel/drivers/ (the exact path varies between Linux kernel branches)
- staging/android/binder.c
- android/binder.c
1. Overview
ServiceManager is the daemon at the heart of Binder IPC. It is itself a Binder service, but it does not use the multi-threaded model from libbinder to talk to the Binder driver; instead it ships its own binder.c and talks to the driver directly, with a single loop, binder_loop, that reads and handles transactions. The benefit of this design is that it is simple and efficient.
The work ServiceManager itself does is relatively simple: looking up and registering services. In Binder IPC, the more common case is communication between a BpBinder and a BBinder, for example between ActivityManagerProxy and ActivityManagerService.
1.1 Flow Diagram
The startup process consists of the following stages:
- Open the binder driver: binder_open;
- Register as the context manager of all binder services: binder_become_context_manager;
- Enter an infinite loop and handle requests coming from clients: binder_loop;
2. Startup Process
ServiceManager is started by init through the following entry in init.rc:
service servicemanager /system/bin/servicemanager
    class core
    user system
    group system
    critical
    onrestart restart healthd
    onrestart restart zygote
    onrestart restart media
    onrestart restart surfaceflinger
    onrestart restart drm
The entry point of Service Manager is the main() function in service_manager.c, shown below:
2.1 main
[-> service_manager.c]
int main(int argc, char **argv) {
    struct binder_state *bs;
    //open the binder driver and request a 128 KB mapping [see Section 2.2]
    bs = binder_open(128*1024);
    ...
    //become the context manager [see Section 2.3]
    if (binder_become_context_manager(bs)) {
        return -1;
    }

    selinux_enabled = is_selinux_enabled(); //whether selinux is enabled
    sehandle = selinux_android_service_context_handle();
    selinux_status_open(true);

    if (selinux_enabled > 0) {
        if (sehandle == NULL) {
            abort(); //cannot obtain sehandle
        }
        if (getcon(&service_manager_context) != 0) {
            abort(); //cannot obtain the service_manager context
        }
    }
    ...
    //enter the infinite loop and handle requests from clients [see Section 2.4]
    binder_loop(bs, svcmgr_handler);
    return 0;
}
2.2 binder_open
[-> servicemanager/binder.c]
struct binder_state *binder_open(size_t mapsize)
{
    struct binder_state *bs; //[see Section 2.2.1]
    struct binder_version vers;

    bs = malloc(sizeof(*bs));
    if (!bs) {
        errno = ENOMEM;
        return NULL;
    }

    //trap into the kernel via a system call and open the Binder device driver
    bs->fd = open("/dev/binder", O_RDWR);
    if (bs->fd < 0) {
        goto fail_open; // failed to open the binder device
    }

    //query the binder version information via the ioctl system call
    if ((ioctl(bs->fd, BINDER_VERSION, &vers) == -1) ||
        (vers.protocol_version != BINDER_CURRENT_PROTOCOL_VERSION)) {
        goto fail_open; //kernel-space and user-space binder versions do not match
    }

    bs->mapsize = mapsize;
    //set up the memory mapping via mmap; the mapping size must be a multiple of the page size
    bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);
    if (bs->mapped == MAP_FAILED) {
        goto fail_map; // failed to map the binder device memory
    }
    return bs;

fail_map:
    close(bs->fd);
fail_open:
    free(bs);
    return NULL;
}
Opening the binder driver involves the following:
open() is called first to open the binder device. The open() system call enters the Binder driver and invokes the driver-side binder_open(), which creates a binder_proc object, stores it in filp->private_data, and adds it to the global list binder_procs (static HLIST_HEAD(binder_procs);). ioctl() is then used to check that the user-space binder version matches the version in the Binder driver.
mmap() is called next to set up the memory mapping. The mmap() system call corresponds to the driver-side binder_mmap(), which creates a binder_buffer object and adds it to the proc->buffers list of the current binder_proc.
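To make the driver-side step concrete, below is a heavily simplified sketch of what the kernel's binder_open() does, based purely on the description above (allocate a binder_proc, record it in filp->private_data, add it to the global binder_procs list). It omits most of the real driver's bookkeeping and is meant only as an illustration, not as the actual driver code.
static HLIST_HEAD(binder_procs); // global list of every process that opened /dev/binder

static int binder_open(struct inode *nodp, struct file *filp)
{
    struct binder_proc *proc;

    // allocate the per-process bookkeeping object
    proc = kzalloc(sizeof(*proc), GFP_KERNEL);
    if (proc == NULL)
        return -ENOMEM;

    get_task_struct(current);
    proc->tsk = current;              // remember the opening task
    INIT_LIST_HEAD(&proc->todo);      // pending work queued for this process
    init_waitqueue_head(&proc->wait); // binder threads of this process block here

    binder_lock(__func__);
    hlist_add_head(&proc->proc_node, &binder_procs); // link into the global binder_procs list
    proc->pid = current->group_leader->pid;
    filp->private_data = proc;        // later ioctl()/mmap() calls retrieve the proc from here
    binder_unlock(__func__);

    return 0;
}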
2.2.1 binder_state
[-> servicemanager/binder.c]
struct binder_state
{
    int fd; // file descriptor of /dev/binder
    void *mapped; //start address of the mmap'ed memory
    size_t mapsize; //size of the allocated mapping, 128 KB by default
};
2.3 binder_become_context_manager
[-> servicemanager/binder.c]
int binder_become_context_manager(struct binder_state *bs) {
    //send the BINDER_SET_CONTEXT_MGR command via ioctl [see Section 2.3.1]
    return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
}
This makes the caller the context manager; there is only one such manager in the entire system. The ioctl() system call lands in the driver-side binder_ioctl() method.
2.3.1 binder_ioctl
[-> kernel/drivers/android/binder.c]
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) {
    binder_lock(__func__);
    switch (cmd) {
    case BINDER_SET_CONTEXT_MGR:
        ret = binder_ioctl_set_ctx_mgr(filp); //[see Section 2.3.2]
        break;
    case ...
    }
    binder_unlock(__func__);
}
For the BINDER_SET_CONTEXT_MGR command, this ends up calling binder_ioctl_set_ctx_mgr(); binder_main_lock is held throughout this step.
2.3.2 binder_ioctl_set_ctx_mgr
[-> kernel/drivers/android/binder.c]
static int binder_ioctl_set_ctx_mgr(struct file *filp)
{
    int ret = 0;
    struct binder_proc *proc = filp->private_data;
    kuid_t curr_euid = current_euid();

    //make sure the context manager node is only created once
    if (binder_context_mgr_node != NULL) {
        ret = -EBUSY;
        goto out;
    }
    if (uid_valid(binder_context_mgr_uid)) {
        ...
    } else {
        //record the current thread's euid as the uid of Service Manager
        binder_context_mgr_uid = curr_euid;
    }

    //create the binder node for ServiceManager [see Section 2.3.3]
    binder_context_mgr_node = binder_new_node(proc, 0, 0);
    ...
    binder_context_mgr_node->local_weak_refs++;
    binder_context_mgr_node->local_strong_refs++;
    binder_context_mgr_node->has_strong_ref = 1;
    binder_context_mgr_node->has_weak_ref = 1;
out:
    return ret;
}
Inside the binder driver, the following static variables are defined:
// the binder_node corresponding to service manager
static struct binder_node *binder_context_mgr_node;
// uid of the thread running service manager
static kuid_t binder_context_mgr_uid = INVALID_UID;
This step creates the global binder_node object binder_context_mgr_node and increments its strong and weak reference counts by one each.
2.3.3 binder_new_node
[-> kernel/drivers/android/binder.c]
static struct binder_node *binder_new_node(struct binder_proc *proc,
                                           binder_uintptr_t ptr,
                                           binder_uintptr_t cookie)
{
    struct rb_node **p = &proc->nodes.rb_node;
    struct rb_node *parent = NULL;
    struct binder_node *node;

    //empty on the first call
    while (*p) {
        parent = *p;
        node = rb_entry(parent, struct binder_node, rb_node);

        if (ptr < node->ptr)
            p = &(*p)->rb_left;
        else if (ptr > node->ptr)
            p = &(*p)->rb_right;
        else
            return NULL;
    }

    //allocate kernel memory for the new binder_node
    node = kzalloc(sizeof(*node), GFP_KERNEL);
    if (node == NULL)
        return NULL;
    binder_stats_created(BINDER_STAT_NODE);
    // add the newly created node to the proc's red-black tree
    rb_link_node(&node->rb_node, parent, p);
    rb_insert_color(&node->rb_node, &proc->nodes);
    node->debug_id = ++binder_last_id;
    node->proc = proc;
    node->ptr = ptr;
    node->cookie = cookie;
    node->work.type = BINDER_WORK_NODE; //set the type of the binder_work
    INIT_LIST_HEAD(&node->work.entry);
    INIT_LIST_HEAD(&node->async_todo);
    return node;
}
This creates a binder_node structure in the Binder driver, records the current binder_proc in node->proc, and initializes the node's work.entry and async_todo lists.
2.4 binder_loop
[-> servicemanager/binder.c]
void binder_loop(struct binder_state *bs, binder_handler func) {
    int res;
    struct binder_write_read bwr;
    uint32_t readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;
    //send the BC_ENTER_LOOPER command to the binder driver so that Service Manager enters the loop [see Section 2.4.1]
    binder_write(bs, readbuf, sizeof(uint32_t));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr); //keep looping, reading from and writing to the binder driver
        if (res < 0) {
            break;
        }

        // parse the binder data [see Section 2.5]
        res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
        if (res == 0) {
            break;
        }
        if (res < 0) {
            break;
        }
    }
}
This enters the read/write loop. The parameter func, passed in from main(), points to svcmgr_handler.
binder_write() sends the BC_ENTER_LOOPER command to the binder driver via ioctl(); at this point only write_buffer in bwr carries data, so the driver goes into binder_thread_write(). The code then enters the for loop and calls ioctl() again; this time only read_buffer carries data, so the driver goes into binder_thread_read().
2.4.1 binder_write
[-> servicemanager/binder.c]
int binder_write(struct binder_state *bs, void *data, size_t len) {
    struct binder_write_read bwr;
    int res;

    bwr.write_size = len;
    bwr.write_consumed = 0;
    bwr.write_buffer = (uintptr_t) data; //here data holds BC_ENTER_LOOPER
    bwr.read_size = 0;
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;
    //[see Section 2.4.2]
    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
    return res;
}
bwr is initialized from the arguments: write_size is 4 bytes and write_buffer points to the start of the buffer, whose content is the BC_ENTER_LOOPER request protocol code. The bwr data is then handed to the binder driver via ioctl, which invokes its binder_ioctl method, as follows:
2.4.2 binder_ioctl
[-> kernel/drivers/android/binder.c]
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    int ret;
    struct binder_proc *proc = filp->private_data;
    struct binder_thread *thread;

    ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
    ...
    binder_lock(__func__);
    thread = binder_get_thread(proc); //get the calling thread's binder_thread
    switch (cmd) {
    case BINDER_WRITE_READ: //perform the binder read/write operation
        ret = binder_ioctl_write_read(filp, cmd, arg, thread); //[see Section 2.4.3]
        if (ret)
            goto err;
        break;
    case ...
    }
    ret = 0;
err:
    if (thread)
        thread->looper &= ~BINDER_LOOPER_STATE_NEED_RETURN;
    binder_unlock(__func__);
    ...
    return ret;
}
2.4.3 binder_ioctl_write_read
[-> kernel/drivers/android/binder.c]
static int binder_ioctl_write_read(struct file *filp,
                                   unsigned int cmd, unsigned long arg,
                                   struct binder_thread *thread)
{
    int ret = 0;
    struct binder_proc *proc = filp->private_data;
    void __user *ubuf = (void __user *)arg;
    struct binder_write_read bwr;

    if (copy_from_user(&bwr, ubuf, sizeof(bwr))) { //copy the user-space data ubuf into bwr
        ret = -EFAULT;
        goto out;
    }
    if (bwr.write_size > 0) { //the write buffer has data this time [see Section 2.4.4]
        ret = binder_thread_write(proc, thread,
                                  bwr.write_buffer, bwr.write_size, &bwr.write_consumed);
        ...
    }
    if (bwr.read_size > 0) { //the read buffer is empty this time
        ...
    }
    if (copy_to_user(ubuf, &bwr, sizeof(bwr))) { //copy the kernel data bwr back to the user-space ubuf
        ret = -EFAULT;
        goto out;
    }
out:
    return ret;
}
Here the user-space binder_write_read structure is copied into kernel space.
2.4.4 binder_thread_write
[-> kernel/drivers/android/binder.c]
static int binder_thread_write(struct binder_proc *proc, struct binder_thread *thread,
                               binder_uintptr_t binder_buffer, size_t size,
                               binder_size_t *consumed) {
    uint32_t cmd;
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;

    while (ptr < end && thread->return_error == BR_OK) {
        get_user(cmd, (uint32_t __user *)ptr); //fetch the command
        switch (cmd) {
        case BC_ENTER_LOOPER:
            //set the looper state of this thread
            thread->looper |= BINDER_LOOPER_STATE_ENTERED;
            break;
        case ...;
        }
    }
}
The cmd is taken out of bwr.write_buffer, here BC_ENTER_LOOPER. So the purpose of this binder_write() call from user space is simply to set the current thread's looper state to BINDER_LOOPER_STATE_ENTERED.
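For reference, thread->looper is a bitmask. The flag values below reproduce the definitions found in the Android 6.0-era binder driver and are listed only so the bit operations in the surrounding snippets are easier to follow:
enum {
    BINDER_LOOPER_STATE_REGISTERED  = 0x01, // thread registered via BC_REGISTER_LOOPER
    BINDER_LOOPER_STATE_ENTERED     = 0x02, // thread entered the loop via BC_ENTER_LOOPER (this case)
    BINDER_LOOPER_STATE_EXITED      = 0x04,
    BINDER_LOOPER_STATE_INVALID     = 0x08,
    BINDER_LOOPER_STATE_WAITING     = 0x10, // currently blocked waiting for work
    BINDER_LOOPER_STATE_NEED_RETURN = 0x20, // must return to user space
};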
2.5 binder_parse
[-> servicemanager/binder.c]
int binder_parse(struct binder_state *bs, struct binder_io *bio,
                 uintptr_t ptr, size_t size, binder_handler func)
{
    int r = 1;
    uintptr_t end = ptr + (uintptr_t) size;

    while (ptr < end) {
        uint32_t cmd = *(uint32_t *) ptr;
        ptr += sizeof(uint32_t);
        switch(cmd) {
        case BR_NOOP: //no operation
            break;
        case BR_TRANSACTION_COMPLETE:
            break;
        case BR_INCREFS:
        case BR_ACQUIRE:
        case BR_RELEASE:
        case BR_DECREFS:
            ptr += sizeof(struct binder_ptr_cookie);
            break;
        case BR_TRANSACTION: {
            struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
            ...
            binder_dump_txn(txn);
            if (func) {
                unsigned rdata[256/4];
                struct binder_io msg;
                struct binder_io reply;
                int res;
                //[see Section 2.5.1]
                bio_init(&reply, rdata, sizeof(rdata), 4);
                bio_init_from_txn(&msg, txn); //build the binder_io info from txn
                //[see Section 2.6]
                res = func(bs, txn, &msg, &reply);
                //[see Section 3.4]
                binder_send_reply(bs, &reply, txn->data.ptr.buffer, res);
            }
            ptr += sizeof(*txn);
            break;
        }
        case BR_REPLY: {
            struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
            ...
            binder_dump_txn(txn);
            if (bio) {
                bio_init_from_txn(bio, txn);
                bio = 0;
            }
            ptr += sizeof(*txn);
            r = 0;
            break;
        }
        case BR_DEAD_BINDER: {
            struct binder_death *death = (struct binder_death *)(uintptr_t) *(binder_uintptr_t *)ptr;
            ptr += sizeof(binder_uintptr_t);
            // binder death notification [see Section 3.3]
            death->func(bs, death->ptr);
            break;
        }
        case BR_FAILED_REPLY:
            r = -1;
            break;
        case BR_DEAD_REPLY:
            r = -1;
            break;
        default:
            return -1;
        }
    }
    return r;
}
Parses the binder data read back from the driver; func points to svcmgr_handler, so whenever a request arrives, svcmgr_handler is invoked.
2.5.1 bio_init
[-> servicemanager/binder.c]
void bio_init(struct binder_io *bio, void *data,
size_t maxdata, size_t maxoffs)
{
size_t n = maxoffs * sizeof(size_t);
if (n > maxdata) {
...
}
bio->data = bio->data0 = (char *) data + n;
bio->offs = bio->offs0 = data;
bio->data_avail = maxdata - n;
bio->offs_avail = maxoffs;
bio->flags = 0;
}
where binder_io is defined as follows:
struct binder_io
{
char *data; /* pointer to read/write from */
binder_size_t *offs; /* array of offsets */
size_t data_avail; /* bytes available in data buffer */
size_t offs_avail; /* entries available in offsets array */
char *data0; //start of the data buffer
binder_size_t *offs0; //start of the buffer offsets array
uint32_t flags;
uint32_t unused;
};
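A small standalone demo (stand-in types and hypothetical sizes, unrelated to the real binder headers) may help visualize the layout bio_init() produces: the first maxoffs * sizeof(size_t) bytes of the caller's buffer are reserved for the offsets array, and data/data0 point just past that region.
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* stand-in for the binder_io fields touched by bio_init(), for illustration only */
typedef uintptr_t demo_binder_size_t;

struct demo_binder_io {
    char *data;
    demo_binder_size_t *offs;
    size_t data_avail;
    size_t offs_avail;
    char *data0;
    demo_binder_size_t *offs0;
};

static void demo_bio_init(struct demo_binder_io *bio, void *data,
                          size_t maxdata, size_t maxoffs)
{
    size_t n = maxoffs * sizeof(size_t);     /* bytes reserved at the front for the offsets array */
    bio->data = bio->data0 = (char *) data + n;
    bio->offs = bio->offs0 = data;
    bio->data_avail = maxdata - n;
    bio->offs_avail = maxoffs;
}

int main(void)
{
    unsigned rdata[256 / 4];                 /* same 256-byte scratch buffer binder_parse() uses */
    struct demo_binder_io reply;

    demo_bio_init(&reply, rdata, sizeof(rdata), 4);
    /* the first 4 * sizeof(size_t) bytes hold offsets; the payload area follows */
    printf("offsets area: %zu bytes, payload area: %zu bytes\n",
           (size_t)(reply.data0 - (char *) rdata), reply.data_avail);
    return 0;
}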
2.5.2 bio_init_from_txn
[-> servicemanager/binder.c]
void bio_init_from_txn(struct binder_io *bio, struct binder_transaction_data *txn)
{
bio->data = bio->data0 = (char *)(intptr_t)txn->data.ptr.buffer;
bio->offs = bio->offs0 = (binder_size_t *)(intptr_t)txn->data.ptr.offsets;
bio->data_avail = txn->data_size;
bio->offs_avail = txn->offsets_size / sizeof(size_t);
bio->flags = BIO_F_SHARED;
}
Assigns the transaction data carried in readbuf to the data fields of the bio object.
2.6 svcmgr_handler
[-> service_manager.c]
int svcmgr_handler(struct binder_state *bs,
                   struct binder_transaction_data *txn,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    struct svcinfo *si; //[see Section 2.6.1]
    uint16_t *s;
    size_t len;
    uint32_t handle;
    uint32_t strict_policy;
    int allow_isolated;
    ...
    strict_policy = bio_get_uint32(msg);
    s = bio_get_string16(msg, &len);
    ...

    switch(txn->code) {
    case SVC_MGR_GET_SERVICE:
    case SVC_MGR_CHECK_SERVICE:
        s = bio_get_string16(msg, &len); //service name
        //look up the service by name [see Section 3.1]
        handle = do_find_service(bs, s, len, txn->sender_euid, txn->sender_pid);
        //[see Section 3.1.2]
        bio_put_ref(reply, handle);
        return 0;

    case SVC_MGR_ADD_SERVICE:
        s = bio_get_string16(msg, &len); //service name
        handle = bio_get_ref(msg); //handle [see Section 3.2.3]
        allow_isolated = bio_get_uint32(msg) ? 1 : 0;
        //register the given service [see Section 3.2]
        if (do_add_service(bs, s, len, handle, txn->sender_euid,
            allow_isolated, txn->sender_pid))
            return -1;
        break;

    case SVC_MGR_LIST_SERVICES: {
        uint32_t n = bio_get_uint32(msg);

        if (!svc_can_list(txn->sender_pid)) {
            return -1;
        }
        si = svclist;
        while ((n-- > 0) && si)
            si = si->next;
        if (si) {
            bio_put_string16(reply, si->name);
            return 0;
        }
        return -1;
    }
    default:
        return -1;
    }

    bio_put_uint32(reply, 0);
    return 0;
}
This function implements three operations: looking up a service, registering a service, and listing all services.
2.6.1 svcinfo
struct svcinfo
{
    struct svcinfo *next;
    uint32_t handle; //handle of the service
    struct binder_death death;
    int allow_isolated;
    size_t len; //length of the name
    uint16_t name[0]; //service name
};
Every service is represented by an svcinfo structure; its handle value is assigned during service registration and is determined by the side of the process that hosts the service.
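As a standalone illustration of this structure (hypothetical service name and handle, simplified fields), the sketch below mirrors the allocation and head-insertion pattern that do_add_service() uses later: thanks to the flexible array member name[0], a single malloc holds both the fixed fields and the inline UTF-16 name.
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* trimmed-down svcinfo for illustration: handle plus inline UTF-16 name */
struct demo_svcinfo {
    struct demo_svcinfo *next;
    uint32_t handle;
    size_t len;
    uint16_t name[0];            /* flexible array member, name stored inline */
};

static struct demo_svcinfo *demo_svclist; /* head of the singly linked service list */

static struct demo_svcinfo *demo_add(const uint16_t *s, size_t len, uint32_t handle)
{
    /* one allocation covers the struct, the name, and its terminator */
    struct demo_svcinfo *si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));
    if (!si)
        return NULL;
    si->handle = handle;
    si->len = len;
    memcpy(si->name, s, (len + 1) * sizeof(uint16_t));
    si->name[len] = 0;
    si->next = demo_svclist;     /* insert at the head, like svclist in do_add_service() */
    demo_svclist = si;
    return si;
}

int main(void)
{
    /* hypothetical service name "media.demo" in UTF-16 */
    uint16_t name[] = { 'm','e','d','i','a','.','d','e','m','o', 0 };
    demo_add(name, 10, 0x2a);    /* hypothetical handle value */
    printf("registered, handle=%u len=%zu\n",
           (unsigned) demo_svclist->handle, demo_svclist->len);
    return 0;
}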
3. Core Operations
The core work of servicemanager is registering services and looking them up.
3.1 do_find_service
[-> service_manager.c]
uint32_t do_find_service(struct binder_state *bs, const uint16_t *s, size_t len, uid_t uid, pid_t spid)
{
    //look up the requested service [see Section 3.1.1]
    struct svcinfo *si = find_svc(s, len);

    if (!si || !si->handle) {
        return 0;
    }

    if (!si->allow_isolated) {
        uid_t appid = uid % AID_USER;
        //the service does not allow isolated callers, so reject requests from isolated processes
        if (appid >= AID_ISOLATED_START && appid <= AID_ISOLATED_END) {
            return 0;
        }
    }

    //check whether the caller is allowed to find this service
    if (!svc_can_find(s, len, spid)) {
        return 0;
    }
    return si->handle;
}
Finds the target service and returns the handle corresponding to that service.
3.1.1 find_svc
struct svcinfo *find_svc(const uint16_t *s16, size_t len)
{
struct svcinfo *si;
for (si = svclist; si; si = si->next) {
//return the entry when the name matches exactly
if ((len == si->len) &&
!memcmp(s16, si->name, len * sizeof(uint16_t))) {
return si;
}
}
return NULL;
}
Walks the svclist service list by service name to check whether the service has already been registered. If the service exists in svclist, the corresponding svcinfo entry is returned; otherwise NULL is returned.
Once the service's handle has been found, bio_put_ref(reply, handle) is called to pack the handle into reply.
3.1.2 bio_put_ref
void bio_put_ref(struct binder_io *bio, uint32_t handle) {
    struct flat_binder_object *obj;

    if (handle)
        obj = bio_alloc_obj(bio); //[see Section 3.1.3]
    else
        obj = bio_alloc(bio, sizeof(*obj));

    if (!obj)
        return;

    obj->flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
    obj->type = BINDER_TYPE_HANDLE; //the returned object is of HANDLE type
    obj->handle = handle;
    obj->cookie = 0;
}
3.1.3 bio_alloc_obj
static struct flat_binder_object *bio_alloc_obj(struct binder_io *bio)
{
struct flat_binder_object *obj;
obj = bio_alloc(bio, sizeof(*obj)); //[see Section 3.1.4]
if (obj && bio->offs_avail) {
bio->offs_avail--;
*bio->offs++ = ((char*) obj) - ((char*) bio->data0);
return obj;
}
bio->flags |= BIO_F_OVERFLOW;
return NULL;
}
3.1.4 bio_alloc
static void *bio_alloc(struct binder_io *bio, size_t size)
{
size = (size + 3) & (~3);
if (size > bio->data_avail) {
bio->flags |= BIO_F_OVERFLOW;
return NULL;
} else {
void *ptr = bio->data;
bio->data += size;
bio->data_avail -= size;
return ptr;
}
}
3.2 do_add_service
[-> service_manager.c]
int do_add_service(struct binder_state *bs,
                   const uint16_t *s, size_t len,
                   uint32_t handle, uid_t uid, int allow_isolated,
                   pid_t spid)
{
    struct svcinfo *si;

    if (!handle || (len == 0) || (len > 127))
        return -1;

    //permission check [see Section 3.2.1]
    if (!svc_can_register(s, len, spid)) {
        return -1;
    }

    //service lookup [see Section 3.1.1]
    si = find_svc(s, len);
    if (si) {
        if (si->handle) {
            svcinfo_death(bs, si); //the service is already registered, release the old one [see Section 3.2.2]
        }
        si->handle = handle;
    } else {
        si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));
        if (!si) { //out of memory, cannot allocate the entry
            return -1;
        }
        si->handle = handle;
        si->len = len;
        memcpy(si->name, s, (len + 1) * sizeof(uint16_t)); //copy the service info
        si->name[len] = '\0';
        si->death.func = (void*) svcinfo_death;
        si->death.ptr = si;
        si->allow_isolated = allow_isolated;
        si->next = svclist; // svclist holds all registered services
        svclist = si;
    }

    //send a BC_ACQUIRE command targeting handle to the binder driver via ioctl
    binder_acquire(bs, handle);
    //send a BC_REQUEST_DEATH_NOTIFICATION command to the binder driver via ioctl, mainly for cleanup work such as freeing memory [see Section 3.3]
    binder_link_to_death(bs, handle, &si->death);
    return 0;
}
Registering a service consists of three parts:
- svc_can_register: permission check, verifying that the selinux permission is satisfied;
- find_svc: service lookup, searching for an existing service with the same name;
- svcinfo_death: service release; if a service with the same name is already registered, its old record is cleaned up first, and then the current service is added to the service list svclist;
3.2.1 svc_can_register
[-> service_manager.c]
static int svc_can_register(const uint16_t *name, size_t name_len, pid_t spid) {
    const char *perm = "add";
    //check whether the selinux permission is satisfied
    return check_mac_perms_from_lookup(spid, perm, str8(name, name_len)) ? 1 : 0;
}
3.2.2 svcinfo_death
[-> service_manager.c]
void svcinfo_death(struct binder_state *bs, void *ptr) {
struct svcinfo *si = (struct svcinfo* ) ptr;
if (si->handle) {
binder_release(bs, si->handle);
si->handle = 0;
}
}
3.2.3 bio_get_ref
[-> servicemanager/binder.c]
uint32_t bio_get_ref(struct binder_io *bio) {
struct flat_binder_object *obj;
obj = _bio_get_obj(bio);
if (!obj)
return 0;
if (obj->type == BINDER_TYPE_HANDLE)
return obj->handle;
return 0;
}
3.3 binder_link_to_death
[-> servicemanager/binder.c]
void binder_link_to_death(struct binder_state *bs, uint32_t target, struct binder_death *death) {
struct {
uint32_t cmd;
struct binder_handle_cookie payload;
} __attribute__((packed)) data;
data.cmd = BC_REQUEST_DEATH_NOTIFICATION;
data.payload.handle = target;
data.payload.cookie = (uintptr_t) death;
binder_write(bs, &data, sizeof(data)); //[see Section 3.3.1]
}
binder_write() goes through the same path as in Section 2.4.1: after entering the Binder driver it goes straight into binder_thread_write(), which handles the BC_REQUEST_DEATH_NOTIFICATION command.
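One detail worth noting is the __attribute__((packed)) on the struct above: the driver consumes the write buffer as a tightly packed byte stream, a 32-bit command followed immediately by the handle/cookie payload, so no padding may be inserted. A standalone check with stand-in types (not the real UAPI definitions) illustrates the difference packing makes:
#include <stdint.h>
#include <stdio.h>

/* stand-in for binder_handle_cookie: 32-bit handle plus 64-bit cookie */
struct demo_handle_cookie {
    uint32_t handle;
    uint64_t cookie;
} __attribute__((packed));

struct demo_packed_cmd {
    uint32_t cmd;
    struct demo_handle_cookie payload;
} __attribute__((packed));

struct demo_unpacked_cmd {
    uint32_t cmd;
    struct { uint32_t handle; uint64_t cookie; } payload;
};

int main(void)
{
    /* packed: 4 + 4 + 8 = 16 bytes, exactly the byte stream the driver walks through */
    printf("packed:   %zu bytes\n", sizeof(struct demo_packed_cmd));
    /* without packing, aligning the 64-bit cookie inserts padding (typically 24 bytes) */
    printf("unpacked: %zu bytes\n", sizeof(struct demo_unpacked_cmd));
    return 0;
}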
3.3.1 binder_ioctl_write_read
[-> kernel/drivers/android/binder.c]
static int binder_ioctl_write_read(struct file *filp,
                                   unsigned int cmd, unsigned long arg,
                                   struct binder_thread *thread)
{
    int ret = 0;
    struct binder_proc *proc = filp->private_data;
    void __user *ubuf = (void __user *)arg;
    struct binder_write_read bwr;

    if (copy_from_user(&bwr, ubuf, sizeof(bwr))) { //copy the user-space data ubuf into bwr
        ret = -EFAULT;
        goto out;
    }
    if (bwr.write_size > 0) { //the write buffer has data this time [see Section 3.3.2]
        ret = binder_thread_write(proc, thread,
                                  bwr.write_buffer, bwr.write_size, &bwr.write_consumed);
        if (ret < 0) {
            bwr.read_consumed = 0;
            if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                ret = -EFAULT;
            goto out;
        }
    }
    if (bwr.read_size > 0) { //the read buffer has data this time [see Section 3.3.3]
        ret = binder_thread_read(proc, thread, bwr.read_buffer,
                                 bwr.read_size,
                                 &bwr.read_consumed,
                                 filp->f_flags & O_NONBLOCK);
        if (!list_empty(&proc->todo)) //if the process todo list is not empty, wake up the threads waiting on it
            wake_up_interruptible(&proc->wait);
        if (ret < 0) {
            if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                ret = -EFAULT;
            goto out;
        }
    }
    if (copy_to_user(ubuf, &bwr, sizeof(bwr))) { //copy the kernel data bwr back to the user-space ubuf
        ret = -EFAULT;
        goto out;
    }
out:
    return ret;
}
3.3.2 binder_thread_write
[-> kernel/drivers/android/binder.c]
static int binder_thread_write(struct binder_proc *proc, struct binder_thread *thread,
                               binder_uintptr_t binder_buffer, size_t size,
                               binder_size_t *consumed) {
    uint32_t cmd;
    struct binder_context *context = proc->context;
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    void __user *ptr = buffer + *consumed; //ptr points to the data placed in write_buffer by binder_link_to_death (Section 3.3)
    void __user *end = buffer + size;

    while (ptr < end && thread->return_error == BR_OK) {
        get_user(cmd, (uint32_t __user *)ptr); //fetch BC_REQUEST_DEATH_NOTIFICATION
        ptr += sizeof(uint32_t);
        switch (cmd) {
        case BC_REQUEST_DEATH_NOTIFICATION: { //register a death notification
            uint32_t target;
            void __user *cookie;
            struct binder_ref *ref;
            struct binder_ref_death *death;

            get_user(target, (uint32_t __user *)ptr); //fetch target
            ptr += sizeof(uint32_t);
            get_user(cookie, (void __user * __user *)ptr); //fetch death
            ptr += sizeof(void *);

            ref = binder_get_ref(proc, target); //get the binder_ref of the target service

            if (cmd == BC_REQUEST_DEATH_NOTIFICATION) {
                if (ref->death) {
                    break; //death notification already registered
                }
                death = kzalloc(sizeof(*death), GFP_KERNEL);

                INIT_LIST_HEAD(&death->work.entry);
                death->cookie = cookie;
                ref->death = death;
                if (ref->node->proc == NULL) { //if the process hosting the target binder service is already dead, send the death notification now
                    ref->death->work.type = BINDER_WORK_DEAD_BINDER;
                    //if the current thread is a binder thread, add the work directly to the current thread's todo list; it is then handled in [Section 3.3.3]
                    if (thread->looper & (BINDER_LOOPER_STATE_REGISTERED | BINDER_LOOPER_STATE_ENTERED)) {
                        list_add_tail(&ref->death->work.entry, &thread->todo);
                    } else {
                        list_add_tail(&ref->death->work.entry, &proc->todo);
                        wake_up_interruptible(&proc->wait);
                    }
                }
            } else {
                ...
            }
        } break;
        case ...;
        }
        *consumed = ptr - buffer;
    }
}
Here proc and thread both refer to the current servicemanager process. Now that the todo list has pending work, binder_thread_read is entered.
So in which scenario does a BINDER_WORK_DEAD_BINDER work item get added to the queue? When the process hosting a binder service dies, binder_release is called, which in turn calls binder_node_release; this is the path that emits the death-notification callback.
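As a rough sketch of that path (simplified from the description above, not the complete Android 6.0 driver code): when the process hosting a binder node goes away, the driver walks the node's refs, and for every ref that registered a death notification it queues a BINDER_WORK_DEAD_BINDER work item onto the referencing process's todo list and wakes it up; servicemanager then reads it back as BR_DEAD_BINDER.
static int binder_node_release_sketch(struct binder_node *node, int refs)
{
    struct binder_ref *ref;

    hlist_for_each_entry(ref, &node->refs, node_entry) {
        refs++;
        if (!ref->death)
            continue;                        /* this ref never asked for a death notification */

        /* turn the previously registered binder_ref_death into pending work */
        ref->death->work.type = BINDER_WORK_DEAD_BINDER;
        list_add_tail(&ref->death->work.entry, &ref->proc->todo);
        wake_up_interruptible(&ref->proc->wait); /* wake the waiting binder thread */
    }
    return refs;
}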
3.3.3 binder_thread_read
static int binder_thread_read(struct binder_proc *proc,
                              struct binder_thread *thread,
                              binder_uintptr_t binder_buffer, size_t size,
                              binder_size_t *consumed, int non_block)
{
    ...
    //only when the current thread's todo list is empty and transaction_stack is empty will it start handling work queued on the process
    if (wait_for_proc_work) {
        ...
        ret = wait_event_freezable_exclusive(proc->wait, binder_has_proc_work(proc, thread));
    } else {
        ...
        ret = wait_event_freezable(thread->wait, binder_has_thread_work(thread));
    }
    binder_lock(__func__); //acquire the lock

    if (wait_for_proc_work)
        proc->ready_threads--; //one fewer idle binder thread
    thread->looper &= ~BINDER_LOOPER_STATE_WAITING;

    while (1) {
        uint32_t cmd;
        struct binder_transaction_data tr;
        struct binder_work *w;
        struct binder_transaction *t = NULL;

        //take the binder_work queued earlier off the todo list; its type here is BINDER_WORK_DEAD_BINDER
        if (!list_empty(&thread->todo)) {
            w = list_first_entry(&thread->todo, struct binder_work,
                                 entry);
        } else if (!list_empty(&proc->todo) && wait_for_proc_work) {
            w = list_first_entry(&proc->todo, struct binder_work,
                                 entry);
        }

        switch (w->type) {
        case BINDER_WORK_DEAD_BINDER: {
            struct binder_ref_death *death;
            uint32_t cmd;

            death = container_of(w, struct binder_ref_death, work);
            if (w->type == BINDER_WORK_CLEAR_DEATH_NOTIFICATION)
                ...
            else
                cmd = BR_DEAD_BINDER; //this branch is taken
            put_user(cmd, (uint32_t __user *)ptr); //copy to user space [see Section 3.3.4]
            ptr += sizeof(uint32_t);
            //this cookie is the svcinfo_death passed in earlier
            put_user(death->cookie, (binder_uintptr_t __user *)ptr);
            ptr += sizeof(binder_uintptr_t);

            if (w->type == BINDER_WORK_CLEAR_DEATH_NOTIFICATION) {
                ...
            } else
                list_move(&w->entry, &proc->delivered_death);
            if (cmd == BR_DEAD_BINDER)
                goto done;
        } break;
        }
    }
    ...
    return 0;
}
The BR_DEAD_BINDER command is written to user space, together with the cookie, which is the svcinfo_death passed in earlier. The next time binder_loop runs binder_parse, this message is handled.
3.3.4 binder_parse
[-> servicemanager/binder.c]
int binder_parse(struct binder_state *bs, struct binder_io *bio, uintptr_t ptr, size_t size, binder_handler func) {
int r = 1;
uintptr_t end = ptr + (uintptr_t) size;
while (ptr < end) {
uint32_t cmd = *(uint32_t *) ptr;
ptr += sizeof(uint32_t);
switch(cmd) {
case BR_DEAD_BINDER: {
struct binder_death *death = (struct binder_death *)(uintptr_t) *(binder_uintptr_t *)ptr;
ptr += sizeof(binder_uintptr_t);
// binder death notification [see Section 3.3.5]
death->func(bs, death->ptr);
break;
}
...
}
}
return r;
}
From si->death.func = (void*) svcinfo_death; in Section 3.2, we know that death->func here is the svcinfo_death() function.
3.3.5 svcinfo_death
[-> service_manager.c]
void svcinfo_death(struct binder_state *bs, void *ptr) {
struct svcinfo *si = (struct svcinfo* ) ptr;
if (si->handle) {
binder_release(bs, si->handle);
si->handle = 0;
}
}
3.3.6 binder_release
[-> servicemanager/binder.c]
void binder_release(struct binder_state *bs, uint32_t target) {
uint32_t cmd[2];
cmd[0] = BC_RELEASE;
cmd[1] = target;
binder_write(bs, cmd, sizeof(cmd));
}
Writes the BC_RELEASE command to the Binder driver; inside the driver this eventually runs binder_dec_ref(ref, 1) to drop the reference on the binder node.
3.4 binder_send_reply
[-> servicemanager/binder.c]
void binder_send_reply(struct binder_state *bs, struct binder_io *reply, binder_uintptr_t buffer_to_free, int status) {
struct {
uint32_t cmd_free;
binder_uintptr_t buffer;
uint32_t cmd_reply;
struct binder_transaction_data txn;
} __attribute__((packed)) data;
data.cmd_free = BC_FREE_BUFFER; //free buffer command
data.buffer = buffer_to_free;
data.cmd_reply = BC_REPLY; // reply command
data.txn.target.ptr = 0;
data.txn.cookie = 0;
data.txn.code = 0;
if (status) {
data.txn.flags = TF_STATUS_