Binder Mechanism Scenario Analysis: a C Service Application
1. Overview
This article covers only the implementation principles of binder, not how Android's Java layer invokes it. The code involved is ServiceManager, led_control_server, and test_client, all written in C; led_control_server and test_client are modeled on bctest.c. Running binder on a plain Linux platform makes the mechanism easier to analyze (you can add plenty of logs). When running on Linux, start ServiceManager first, then led_control_server, and finally test_client.
1.1 Binder communication model
Binder communication uses a client/server architecture. Seen as components, it involves the Client, the Server, the ServiceManager, and the binder driver; the ServiceManager acts as the daemon through which services are registered and looked up.
1.2 Runtime environment
The code in this article runs on an imx6ul board under plain Linux, kernel version 4.10 (this is not an Android-environment analysis).
1.3 Code for this article
All the code for this article has been uploaded.
2. ServiceManager
Source files involved:
frameworks/native/cmds/servicemanager/service_manager.c
frameworks/native/cmds/servicemanager/binder.c
frameworks/native/cmds/servicemanager/bctest.c
ServiceManager is the daemon of binder communication; it is itself a binder service, something like a root administrator. Its main jobs are looking up and registering services. Next, starting from main, let's walk through how ServiceManager serves requests.
2.1 main
The main function in the original service_manager.c uses selinux; so that it runs in the Linux environment on my board, those parts were removed. After trimming it looks like this:
int main(int argc, char **argv)
{
    struct binder_state *bs;

    bs = binder_open(128*1024);                          ①
    if (!bs) {
        ALOGE("failed to open binder driver\n");
        return -1;
    }

    if (binder_become_context_manager(bs)) {             ②
        ALOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }

    svcmgr_handle = BINDER_SERVICE_MANAGER;
    binder_loop(bs, svcmgr_handler);                     ③

    return 0;
}
①: open the binder driver (see binder_open);
②: register as the context manager (see binder_become_context_manager);
③: enter the loop and process messages (see binder_loop);
From the startup flow of main you can see that service_manager's workflow is not especially complex. The startup flows of client and server are similar to the manager's; they are analyzed in detail later.
2.2 binder_open
struct binder_state *binder_open(size_t mapsize)
{
struct binder_state *bs;
struct binder_version vers;
bs = malloc(sizeof(*bs));
if (!bs) {
errno = ENOMEM;
return NULL;
}
bs->fd = open("/dev/binder", O_RDWR); ①
if (bs->fd < 0) {
fprintf(stderr,"binder: cannot open device (%s)\n",
strerror(errno));
goto fail_open;
}
if ((ioctl(bs->fd, BINDER_VERSION, &vers) == -1) || ②
(vers.protocol_version != BINDER_CURRENT_PROTOCOL_VERSION)) {
fprintf(stderr, "binder: driver version differs from user space\n");
goto fail_open;
}
bs->mapsize = mapsize;
bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0); ③
if (bs->mapped == MAP_FAILED) {
fprintf(stderr,"binder: cannot map device (%s)\n",
strerror(errno));
goto fail_map;
}
return bs;
fail_map:
close(bs->fd);
fail_open:
free(bs);
return NULL;
}
①: open the binder device;
②: query the binder version via ioctl;
③: mmap memory mapping;
A note on why the binder driver is operated through ioctl: ioctl can carry both a read and a write in a single call.
2.3 binder_become_context_manager
int binder_become_context_manager(struct binder_state *bs)
{
return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
}
Again via ioctl: the request type BINDER_SET_CONTEXT_MGR registers this process as the manager.
2.4 binder_loop
void binder_loop(struct binder_state *bs, binder_handler func)
{
int res;
struct binder_write_read bwr;
uint32_t readbuf[32];
bwr.write_size = 0;
bwr.write_consumed = 0;
bwr.write_buffer = 0;
readbuf[0] = BC_ENTER_LOOPER;
binder_write(bs, readbuf, sizeof(uint32_t)); ①
for (;;) {
bwr.read_size = sizeof(readbuf);
bwr.read_consumed = 0;
bwr.read_buffer = (uintptr_t) readbuf;
res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr); ②
if (res < 0) {
ALOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
break;
}
res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func); ③
if (res == 0) {
ALOGE("binder_loop: unexpected reply?!\n");
break;
}
if (res < 0) {
ALOGE("binder_loop: io error %d %s\n", res, strerror(errno));
break;
}
}
}
①: write the BC_ENTER_LOOPER command, telling the driver this thread has entered its main loop and can receive data;
②: read once first, since we just wrote;
③: then parse the data that was read (see binder_parse);
The main flow of the binder_loop function is as follows:
2.5 binder_parse
int binder_parse(struct binder_state *bs, struct binder_io *bio,
uintptr_t ptr, size_t size, binder_handler func)
{
int r = 1;
uintptr_t end = ptr + (uintptr_t) size;
while (ptr < end) {
uint32_t cmd = *(uint32_t *) ptr;
ptr += sizeof(uint32_t);
#if TRACE
fprintf(stderr,"%s:\n", cmd_name(cmd));
#endif
switch(cmd) {
case BR_NOOP:
break;
case BR_TRANSACTION_COMPLETE:
/* check service */
break;
case BR_INCREFS:
case BR_ACQUIRE:
case BR_RELEASE:
case BR_DECREFS:
#if TRACE
fprintf(stderr," %p, %p\n", (void *)ptr, (void *)(ptr + sizeof(void *)));
#endif
ptr += sizeof(struct binder_ptr_cookie);
break;
case BR_SPAWN_LOOPER: {
/* create new thread */
//if (fork() == 0) {
//}
pthread_t thread;
struct binder_thread_desc btd;
btd.bs = bs;
btd.func = func;
pthread_create(&thread, NULL, binder_thread_routine, &btd);
/* in new thread: ioctl(BC_ENTER_LOOPER), enter binder_looper */
break;
}
case BR_TRANSACTION: {
struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
if ((end - ptr) < sizeof(*txn)) {
ALOGE("parse: txn too small!\n");
return -1;
}
if (func) {
unsigned rdata[256/4];
struct binder_io msg;
struct binder_io reply;
int res;
bio_init(&reply, rdata, sizeof(rdata), 4); ①
bio_init_from_txn(&msg, txn);
res = func(bs, txn, &msg, &reply); ②
binder_send_reply(bs, &reply, txn->data.ptr.buffer, res); ③
}
ptr += sizeof(*txn);
break;
}
case BR_REPLY: {
struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
if ((end - ptr) < sizeof(*txn)) {
ALOGE("parse: reply too small!\n");
return -1;
}
binder_dump_txn(txn);
if (bio) {
bio_init_from_txn(bio, txn);
bio = 0;
} else {
/* todo FREE BUFFER */
}
ptr += sizeof(*txn);
r = 0;
break;
}
case BR_DEAD_BINDER: {
struct binder_death *death = (struct binder_death *)(uintptr_t) *(binder_uintptr_t *)ptr;
ptr += sizeof(binder_uintptr_t);
death->func(bs, death->ptr);
break;
}
case BR_FAILED_REPLY:
r = -1;
break;
case BR_DEAD_REPLY:
r = -1;
break;
default:
ALOGE("parse: OOPS %d\n", cmd);
return -1;
}
}
return r;
}
①: initialize rdata in the prescribed format; note that rdata is a buffer created in user space;
②: call the handler that was passed in, svcmgr_handler (see the svcmgr_handler section);
③: send the reply;
In this function we focus only on BR_TRANSACTION; the meanings of the other commands are listed in Table A.
2.6 svcmgr_handler
int svcmgr_handler(struct binder_state *bs,
struct binder_transaction_data *txn,
struct binder_io *msg,
struct binder_io *reply)
{
struct svcinfo *si;
uint16_t *s;
size_t len;
uint32_t handle;
uint32_t strict_policy;
int allow_isolated;
//ALOGI("target=%x code=%d pid=%d uid=%d\n",
// txn->target.handle, txn->code, txn->sender_pid, txn->sender_euid);
if (txn->target.handle != svcmgr_handle)
return -1;
if (txn->code == PING_TRANSACTION)
return 0;
// Equivalent to Parcel::enforceInterface(), reading the RPC
// header with the strict mode policy mask and the interface name.
// Note that we ignore the strict_policy and don't propagate it
// further (since we do no outbound RPCs anyway).
strict_policy = bio_get_uint32(msg); ①
s = bio_get_string16(msg, &len);
if (s == NULL) {
return -1;
}
if ((len != (sizeof(svcmgr_id) / 2)) || ②
memcmp(svcmgr_id, s, sizeof(svcmgr_id))) {
fprintf(stderr,"invalid id %s\n", str8(s, len));
return -1;
}
switch(txn->code) { ③
case SVC_MGR_GET_SERVICE:
case SVC_MGR_CHECK_SERVICE:
s = bio_get_string16(msg, &len);
if (s == NULL) {
return -1;
}
handle = do_find_service(bs, s, len, txn->sender_euid, txn->sender_pid); ④
if (!handle)
break;
bio_put_ref(reply, handle);
return 0;
case SVC_MGR_ADD_SERVICE:
s = bio_get_string16(msg, &len);
if (s == NULL) {
return -1;
}
handle = bio_get_ref(msg);
allow_isolated = bio_get_uint32(msg) ? 1 : 0;
if (do_add_service(bs, s, len, handle, txn->sender_euid, ⑤
allow_isolated, txn->sender_pid))
return -1;
break;
case SVC_MGR_LIST_SERVICES: {
uint32_t n = bio_get_uint32(msg);
if (!svc_can_list(txn->sender_pid)) {
ALOGE("list_service() uid=%d - PERMISSION DENIED\n",
txn->sender_euid);
return -1;
}
si = svclist;
while ((n-- > 0) && si) ⑥
si = si->next;
if (si) {
bio_put_string16(reply, si->name);
return 0;
}
return -1;
}
default:
ALOGE("unknown code %d\n", txn->code);
return -1;
}
bio_put_uint32(reply, 0);
return 0;
}
①: read the frame header, normally 0, because the sender pads 4 bytes of zeros at the very front of the data (the minimum allocation unit is 4 bytes);
②: check whether svcmgr_id matches what we defined: #define SVC_MGR_NAME "linux.os.ServiceManager" (I renamed it);
③: dispatch on code, which amounts to executing the corresponding function for that opcode (after a client requests a service, the server executes it; a server likewise dispatches on different codes; examples follow below);
④: look up the service by name in the server list and return its handle (see do_find_service);
⑤: add a service, normally requested by a server: insert the handle and service name into the service list (the handle here is assigned by the binder driver);
⑥: return the name of the n-th service in ServiceManager's list (n is chosen by the caller);
2.7 do_find_service
uint32_t do_find_service(struct binder_state *bs, const uint16_t *s, size_t len, uid_t uid, pid_t spid)
{
struct svcinfo *si;
if (!svc_can_find(s, len, spid)) { ①
ALOGE("find_service('%s') uid=%d - PERMISSION DENIED\n",
str8(s, len), uid);
return 0;
}
si = find_svc(s, len); ②
//ALOGI("check_service('%s') handle = %x\n", str8(s, len), si ? si->handle : 0);
if (si && si->handle) {
if (!si->allow_isolated) { ③
// If this service doesn't allow access from isolated processes,
// then check the uid to see if it is isolated.
uid_t appid = uid % AID_USER;
if (appid >= AID_ISOLATED_START && appid <= AID_ISOLATED_END) {
return 0;
}
}
return si->handle; ④
} else {
return 0;
}
}
①: check whether the calling process has permission to request the service (permissions are managed with selinux; the code here is trimmed so it runs easily);
②: search ServiceManager's service list;
③: if the binder service does not allow access from isolated (sandboxed) processes, perform the check below;
④: return the handle that was found;
do_find_service mainly searches the service list and returns the service it finds.
2.8 do_add_service
int do_add_service(struct binder_state *bs,
const uint16_t *s, size_t len,
uint32_t handle, uid_t uid, int allow_isolated,
pid_t spid)
{
struct svcinfo *si;
//ALOGI("add_service('%s',%x,%s) uid=%d\n", str8(s, len), handle,
// allow_isolated ? "allow_isolated" : "!allow_isolated", uid);
if (!handle || (len == 0) || (len > 127))
return -1;
if (!svc_can_register(s, len, spid)) { ①
ALOGE("add_service('%s',%x) uid=%d - PERMISSION DENIED\n",
str8(s, len), handle, uid);
return -1;
}
si = find_svc(s, len); ②
if (si) {
if (si->handle) {
ALOGE("add_service('%s',%x) uid=%d - ALREADY REGISTERED, OVERRIDE\n",
str8(s, len), handle, uid);
svcinfo_death(bs, si);
}
si->handle = handle;
} else { ③
si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));
if (!si) {
ALOGE("add_service('%s',%x) uid=%d - OUT OF MEMORY\n",
str8(s, len), handle, uid);
return -1;
}
si->handle = handle;
si->len = len;
memcpy(si->name, s, (len + 1) * sizeof(uint16_t));
si->name[len] = '\0';
si->death.func = (void*) svcinfo_death;
si->death.ptr = si;
si->allow_isolated = allow_isolated;
si->next = svclist;
svclist = si;
}
ALOGI("add_service('%s'), handle = %d\n", str8(s, len), handle);
binder_acquire(bs, handle); ④
binder_link_to_death(bs, handle, &si->death); ⑤
return 0;
}
①: check whether the requesting process has permission to register a service;
②: check whether the service is already registered in ServiceManager's list; if so, notify the driver to kill the old binder service, then install the new one;
③: if the service did not exist yet, fill in a new node and insert it at the head of the service list;
④: increase the binder service's reference count;
⑤: ask the driver to deliver a death notification for this service;
2.9 Call sequence diagram
From the analysis above, ServiceManager's main workflow is as follows:
3. led_control_server
3.1 main
int main(int argc, char **argv)
{
int fd;
struct binder_state *bs;
uint32_t svcmgr = BINDER_SERVICE_MANAGER;
uint32_t handle;
int ret;
struct register_server led_control[3] = { ①
[0] = {
.code = 1,
.fun = led_on
} ,
[1] = {
.code = 2,
.fun = led_off
}
};
bs = binder_open(128*1024); ②
if (!bs) {
ALOGE("failed to open binder driver\n");
return -1;
}
ret = svcmgr_publish(bs, svcmgr, LED_CONTROL_SERVER_NAME, led_control); ③
if (ret) {
ALOGE("failed to publish %s service\n", LED_CONTROL_SERVER_NAME);
return -1;
}
binder_set_maxthreads(bs, 10); ④
binder_loop(bs, led_control_server_handler); ⑤
return 0;
}
①: the service functions that led_control_server provides;
②: initialize the binder components (see 2.2);
③: register the service: svcmgr is the target of the request, LED_CONTROL_SERVER_NAME the service name being registered, led_control the binder entity being registered;
④: set the maximum number of threads to create (see binder_set_maxthreads);
⑤: enter the thread loop (see binder_loop);
3.2 svcmgr_publish
int svcmgr_publish(struct binder_state *bs, uint32_t target, const char *name, void *ptr)
{
int status;
unsigned iodata[512/4];
struct binder_io msg, reply;
bio_init(&msg, iodata, sizeof(iodata), 4); ①
bio_put_uint32(&msg, 0); // strict mode header
bio_put_string16_x(&msg, SVC_MGR_NAME);
bio_put_string16_x(&msg, name);
bio_put_obj(&msg, ptr);
if (binder_call(bs, &msg, &reply, target, SVC_MGR_ADD_SERVICE)) ②
return -1;
status = bio_get_uint32(&reply); ③
binder_done(bs, &msg, &reply); ④
return status;
}
①: initialize the user-space buffer iodata, reserving space for four offset entries, then fill it in the prescribed format;
②: invoke the SVC_MGR_ADD_SERVICE function of the ServiceManager service;
③: read ServiceManager's reply; 0 means success;
④: finish the registration and free the kernel buffer allocated during this exchange;
3.2.1 bio_init
void bio_init(struct binder_io *bio, void *data,
size_t maxdata, size_t maxoffs)
{
size_t n = maxoffs * sizeof(size_t);
if (n > maxdata) {
bio->flags = BIO_F_OVERFLOW;
bio->data_avail = 0;
bio->offs_avail = 0;
return;
}
bio->data = bio->data0 = (char *) data + n; ①
bio->offs = bio->offs0 = data; ②
bio->data_avail = maxdata - n; ③
bio->offs_avail = maxoffs; ④
bio->flags = 0; ⑤
}
①: based on the arguments, reserve n bytes at the front for the offs area; the data pointer then starts at data + n;
②: the offs pointer starts at data, so offs has only those n bytes available;
③: count of usable data space;
④: count of usable offs slots;
⑤: clear the buffer flags;
After init, the buffer is laid out as shown below:
3.2.2 bio_put_uint32
void bio_put_uint32(struct binder_io *bio, uint32_t n)
{
uint32_t *ptr = bio_alloc(bio, sizeof(n));
if (ptr)
*ptr = n;
}
This function writes one uint32 into the buffer; the minimum allocation unit is 4 bytes.
The earlier call bio_put_uint32(&msg, 0); in svcmgr_publish therefore puts the bytes 00 00 00 00 into the buffer.
3.2.3 bio_alloc
static void *bio_alloc(struct binder_io *bio, size_t size)
{
size = (size + 3) & (~3);
if (size > bio->data_avail) {
bio->flags |= BIO_F_OVERFLOW;
return NULL;
} else {
void *ptr = bio->data;
bio->data += size;
bio->data_avail -= size;
return ptr;
}
}
This function allocates in multiples of 4 bytes. It first checks whether the remaining usable space is smaller than the requested size; if so, it sets the BIO_F_OVERFLOW flag. Otherwise it performs the allocation, advances data by size bytes, and decreases the available count data_avail by size bytes.
3.2.4 bio_put_string16_x
void bio_put_string16_x(struct binder_io *bio, const char *_str)
{
unsigned char *str = (unsigned char*) _str;
size_t len;
uint16_t *ptr;
if (!str) { ①
bio_put_uint32(bio, 0xffffffff);
return;
}
len = strlen(_str);
if (len >= (MAX_BIO_SIZE / sizeof(uint16_t))) {
bio_put_uint32(bio, 0xffffffff);
return;
}
/* Note: The payload will carry 32bit size instead of size_t */
bio_put_uint32(bio, len);
ptr = bio_alloc(bio, (len + 1) * sizeof(uint16_t));
if (!ptr)
return;
while (*str) ②
*ptr++ = *str++;
*ptr++ = 0;
}
①: everything up to the bio_alloc call computes the string length, validates it, and writes it into the buffer;
②: copy the string into the buffer; each character occupies two bytes — note the uint16_t *ptr;
3.2.5 bio_put_obj
void bio_put_obj(struct binder_io *bio, void *ptr)
{
struct flat_binder_object *obj;
obj = bio_alloc_obj(bio); ①
if (!obj)
return;
obj->flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
obj->type = BINDER_TYPE_BINDER; ②
obj->binder = (uintptr_t)ptr; ③
obj->cookie = 0;
}
struct flat_binder_object {
/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */
__u32 type;
__u32 flags;
union {
binder_uintptr_t binder;
/* WARNING: DO NOT EDIT, AUTO-GENERATED CODE - SEE TOP FOR INSTRUCTIONS */
__u32 handle;
};
binder_uintptr_t cookie;
};
①: allocate space for one flat_binder_object (see bio_alloc_obj);
②: when type is BINDER_TYPE_BINDER, a binder entity is being passed, normally by a server registering a service;
when type is BINDER_TYPE_HANDLE, a handle is being passed, normally by a client requesting a service;
③: the meaning of the obj->binder value follows the type;
3.2.6 bio_alloc_obj
static struct flat_binder_object *bio_alloc_obj(struct binder_io *bio)
{
struct flat_binder_object *obj;
obj = bio_alloc(bio, sizeof(*obj)); ①
if (obj && bio->offs_avail) {
bio->offs_avail--;
*bio->offs++ = ((char*) obj) - ((char*) bio->data0); ②
return obj;
}
bio->flags |= BIO_F_OVERFLOW;
return NULL;
}
①: allocate flat_binder_object-sized space after the data written so far;
②: bio->offs records the offset, relative to data0, at which this obj was inserted;
Here we finally see what offs is for: it records where the obj-typed entries sit within the payload.
3.2.7 Complete data layout
Putting the above together, one complete transmission is laid out as follows:
3.3 binder_call
int binder_call(struct binder_state *bs,
struct binder_io *msg, struct binder_io *reply,
uint32_t target, uint32_t code)
{
int res;
struct binder_write_read bwr;
struct {
uint32_t cmd;
struct binder_transaction_data txn;
} __attribute__((packed)) writebuf;
unsigned readbuf[32];
if (msg->flags & BIO_F_OVERFLOW) {
fprintf(stderr,"binder: txn buffer overflow\n");
goto fail;
}
writebuf.cmd = BC_TRANSACTION; // binder call transaction
writebuf.txn.target.handle = target; ①
writebuf.txn.code = code; ②
writebuf.txn.flags = 0;
writebuf.txn.data_size = msg->data - msg->data0; ③
writebuf.txn.offsets_size = ((char*) msg->offs) - ((char*) msg->offs0);
writebuf.txn.data.ptr.buffer = (uintptr_t)msg->data0;
writebuf.txn.data.ptr.offsets = (uintptr_t)msg->offs0;
bwr.write_size = sizeof(writebuf); ④
bwr.write_consumed = 0;
bwr.write_buffer = (uintptr_t) &writebuf;
for (;;) {
bwr.read_size = sizeof(readbuf);
bwr.read_consumed = 0;
bwr.read_buffer = (uintptr_t) readbuf;
res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr); ⑤
if (res < 0) {
fprintf(stderr,"binder: ioctl failed (%s)\n", strerror(errno));
goto fail;
}
res = binder_parse(bs, reply, (uintptr_t) readbuf, bwr.read_consumed, 0); ⑥
if (res == 0) return 0;
if (res < 0) goto fail;
}
fail:
memset(reply, 0, sizeof(*reply));
reply->flags |= BIO_F_IOERROR;
return -1;
}
①: target is the destination of this request, here the ServiceManager;
②: code is the opcode of the requested function, defined by the server;
③: convert the binder_io data into binder_transaction_data;
④: the driver drives its reads and writes off these sizes; we'll examine that in detail in the driver analysis;
⑤: perform one write-read exchange;
⑥: parse the data returned after the send and check whether registration succeeded;
3.4 binder_done
void binder_done(struct binder_state *bs,
struct binder_io *msg,
struct binder_io *reply)
{
struct {
uint32_t cmd;
uintptr_t buffer;
} __attribute__((packed)) data;
if (reply->flags & BIO_F_SHARED) {
data.cmd = BC_FREE_BUFFER;
data.buffer = (uintptr_t) reply->data0;
binder_write(bs, &data, sizeof(data));
reply->flags = 0;
}
}
This function is simple: it sends the BC_FREE_BUFFER command to the driver, asking it to free the kernel buffer allocated for the exchange just completed.
3.5 binder_set_maxthreads
void binder_set_maxthreads(struct binder_state *bs, int threads)
{
ioctl(bs->fd, BINDER_SET_MAX_THREADS, &threads);
}
This just calls ioctl with the BINDER_SET_MAX_THREADS command to set the maximum number of threads.
3.6 Call sequence diagram
led_control_server mainly provides the led control service; the overall flow is as follows:
4. test_client
4.1 main
int main(int argc, char **argv)
{
struct binder_state *bs;
uint32_t svcmgr = BINDER_SERVICE_MANAGER;
unsigned int g_led_control_handle;
if (argc < 3) {
ALOGE("Usage:\n");
ALOGE("%s led <on|off>\n", argv[0]);
return -1;
}
bs = binder_open(128*1024); ①
if (!bs) {
ALOGE("failed to open binder driver\n");
return -1;
}
g_led_control_handle = svcmgr_lookup(bs, svcmgr, LED_CONTROL_SERVER_NAME); ②
if (!g_led_control_handle) {
ALOGE( "failed to get led control service\n");
return -1;
}
ALOGI("Handle for led control service = %d\n", g_led_control_handle);
if (!strcmp(argv[1], "led")) {
if (!strcmp(argv[2], "on")) {
if (interface_led_on(bs, g_led_control_handle, 2) == 0) { ③
ALOGI("led was on\n");
}
} else if (!strcmp(argv[2], "off")) {
if (interface_led_off(bs, g_led_control_handle, 2) == 0) {
ALOGI("led was off\n");
}
}
}
binder_release(bs, g_led_control_handle); ④
return 0;
}
①: open the binder device (see 2.2);
②: look up the led control service by name;
③: with the handle obtained, invoke the led control service (see interface_led_on);
④: release the service;
The client flow is simple too: just follow steps ①–④ in order.
4.2 svcmgr_lookup
uint32_t svcmgr_lookup(struct binder_state *bs, uint32_t target, const char *name)
{
uint32_t handle;
unsigned iodata[512/4];
struct binder_io msg, reply;
bio_init(&msg, iodata, sizeof(iodata), 4); ①
bio_put_uint32(&msg, 0); // strict mode header
bio_put_string16_x(&msg, SVC_MGR_NAME);
bio_put_string16_x(&msg, name);
if (binder_call(bs, &msg, &reply, target, SVC_MGR_GET_SERVICE)) ②
return 0;
handle = bio_get_ref(&reply); ③
if (handle)
binder_acquire(bs, handle); ④
binder_done(bs, &msg, &reply); ⑤
return handle;
}
①: since this is a service request, there is no binder entity to add here; see 3.2 for the details, which are not repeated;
②: ask the target process (ServiceManager) for the led_control service (see 3.3 for details);
③: extract the led_control service's handle from the reply buffer ServiceManager returned;
④: increase the handle's reference count;
⑤: free the kernel buffer (see 3.4);
4.2.1 bio_get_ref
uint32_t bio_get_ref(struct binder_io *bio)
{
struct flat_binder_object *obj;
obj = _bio_get_obj(bio); ①
if (!obj)
return 0;
if (obj->type == BINDER_TYPE_HANDLE) ②
return obj->handle;
return 0;
}
①: reinterpret the bio data as a flat_binder_object;
②: check that the binder data type is a reference; if so, return the handle obtained;
4.2.2 _bio_get_obj
static struct flat_binder_object *_bio_get_obj(struct binder_io *bio)
{
size_t n;
size_t off = bio->data - bio->data0; ①
/* TODO: be smarter about this? */
for (n = 0; n < bio->offs_avail; n++) {
if (bio->offs[n] == off)
return bio_get(bio, sizeof(struct flat_binder_object)); ②
}
bio->data_avail = 0;
bio->flags |= BIO_F_OVERFLOW;
return NULL;
}
①: normally this is 0, because when reading the reply sent by ServiceManager, bio->data and bio->data0 point to the same address;
②: return a pointer to the head of the struct flat_binder_object data;
The data returned by ServiceManager is a struct flat_binder_object, laid out as follows:
4.3 interface_led_on
int interface_led_on(struct binder_state *bs, unsigned int handle, unsigned char led_enum)
{
unsigned iodata[512/4];
struct binder_io msg, reply;
int ret = -1;
int exception;
bio_init(&msg, iodata, sizeof(iodata), 4);
bio_put_uint32(&msg, 0); // strict mode header
bio_put_uint32(&msg, led_enum);
if (binder_call(bs, &msg, &reply, handle, LED_CONTROL_ON))
return ret;
exception = bio_get_uint32(&reply);
if (exception == 0)
ret = bio_get_uint32(&reply);
binder_done(bs, &msg, &reply);
return ret;
}
This flow is much like the service lookup in svcmgr_lookup, except that at the end it reads led_control_server's return value.
Note that two uint32 values are read here: the server prepends a header word when it replies. That header is a convention of this code, adjustable, not part of the protocol.
4.4 binder_release
void binder_release(struct binder_state *bs, uint32_t target)
{
uint32_t cmd[2];
cmd[0] = BC_RELEASE;
cmd[1] = target;
binder_write(bs, cmd, sizeof(cmd));
}
This tells the driver layer to drop a reference on the target; it will make more sense together with the driver-side analysis.
4.5 Call sequence diagram
test_client's call sequence is shown below; the process resembles led_control_server's:
A: Table of BR_ meanings
Personally I read BR as short for "binder reply".

Message | Meaning | Parameters
---|---|---
BR_ERROR | An internal error occurred (e.g. memory allocation failure) | ---
BR_OK, BR_NOOP | Operation completed | ---
BR_SPAWN_LOOPER | Used for receiver-side thread-pool management. When the driver finds all of the receiver's threads busy and the pool has not yet reached the maximum set by BINDER_SET_MAX_THREADS, it sends this command asking the receiver to spawn more threads to receive data. | ---
BR_TRANSACTION | Counterpart of the sender's BC_TRANSACTION | binder_transaction_data
BR_REPLY | Reply corresponding to the sender's BC_REPLY | binder_transaction_data
BR_ACQUIRE_RESULT, BR_FINISHED | Unused | ---
BR_DEAD_REPLY | When a binder call is sent and the peer has already died, the driver responds with this command | ---
BR_TRANSACTION_COMPLETE | After sending a packet with BC_TRANSACTION or BC_REPLY, the sender receives this message as feedback that the send succeeded. Unlike BR_REPLY, this is the driver acknowledging successful delivery, not the Server returning the requested data, so the sender gets this message for both synchronous and asynchronous transactions. | ---
BR_INCREFS, BR_ACQUIRE, BR_RELEASE, BR_DECREFS | This group manages strong/weak reference counts. Only the process hosting the Binder entity receives these messages. | binder_uintptr_t binder: pointer to the Binder entity in user space; binder_uintptr_t cookie: extra data associated with the entity
BR_DEAD_BINDER | Delivers a death notification for a Binder entity to the processes holding references to it; a process that receives it then replies BC_DEAD_BINDER_DONE to confirm. | ---
BR_CLEAR_DEATH_NOTIFICATION_DONE | Response to the BC_REQUEST_DEATH_NOTIFICATION command | ---
BR_FAILED_REPLY | Returned when an invalid reference number is sent | ---
B: Table of BC_ meanings
Personally I read BC as short for "binder call" or "binder command".

Message | Meaning | Parameters
---|---|---
BC_TRANSACTION, BC_REPLY | BC_TRANSACTION is used by a Client to send request data to a Server; BC_REPLY is used by a Server to send reply (response) data back to the Client. Each is immediately followed by a binder_transaction_data structure describing the data to write. | struct binder_transaction_data
BC_ACQUIRE_RESULT, BC_ATTEMPT_ACQUIRE | Unused | ---
BC_FREE_BUFFER | Asks the driver to free the block it just allocated in kernel space to hold the user-space data | ---
BC_INCREFS, BC_ACQUIRE, BC_RELEASE, BC_DECREFS | This group increments or decrements Binder reference counts, implementing strong and weak pointers. | ---
BC_INCREFS_DONE, BC_ACQUIRE_DONE | The first time an entity's reference count is incremented, the driver sends BR_INCREFS / BR_ACQUIRE to the process hosting the Binder entity; when that process finishes handling them, it replies BC_INCREFS_DONE / BC_ACQUIRE_DONE. | ---
BC_REGISTER_LOOPER, BC_ENTER_LOOPER, BC_EXIT_LOOPER | Together with BINDER_SET_MAX_THREADS, these implement the driver's management of the receiver thread pool. BC_REGISTER_LOOPER tells the driver a pool thread has been created; BC_ENTER_LOOPER tells it the thread has entered its main loop and can receive data; BC_EXIT_LOOPER tells it the thread is leaving the main loop and will no longer receive data. | ---
BC_REQUEST_DEATH_NOTIFICATION | A process holding a Binder reference uses this command to ask the driver for notification when the Binder entity is destroyed. Although strong pointers ensure the entity lives while references exist, this is a cross-process reference: no one can guarantee the entity won't vanish because its Server closed the binder driver or exited abnormally. All the referencing side can do is ask to be notified at that moment. | ---
BC_DEAD_BINDER_DONE | After deleting its reference, a process that received a death notification confirms with this command. | ---
References
The tables above are adapted from a blog post.