
A Deep Dive into the slab Mechanism

There are many articles on the slab allocator. Some explain only the theory, piling up diagrams without going deep into the source, which makes them hard to follow; others are pure code annotation, which is even harder to follow. After a solid week of study I have put together a detailed summary of slab, aiming to cover it from principles down to source code so that it can be understood thoroughly and used with confidence.

1. Overview of the slab allocator

With the buddy system we can obtain physically contiguous memory in units of pages, i.e. 4 KB at a time. But what if we need to frequently allocate and free small amounts of contiguous physical memory, say tens or hundreds of bytes? The buddy system is a poor fit for that, and this is what motivates slab.

For example, if I need 100 bytes of contiguous physical memory, the kernel's slab allocator will hand me a contiguous memory unit of a suitable size: 128 bytes. It is not exactly 100 bytes but the next size class up; 100 bytes maps to 128, 30 bytes to 32, 60 bytes to 64. The memory itself is still backed by physical pages obtained from the buddy system. When I no longer need this memory I release it, but it is not returned to the buddy system; it goes back to the slab allocator, so the next request of that size can be satisfied without another trip to the buddy system. This is also why the slab allocator tends to hand out the most recently freed (i.e. cache-"hot") memory to the next requester, which is good for performance.

2. Creating a slab

2.1 What "creating a slab" means

The example above requested 100 bytes of contiguous physical memory and actually received 128; in other words, the kernel's slab allocator groups requests of different lengths into size classes. This is one of the basic principles of the slab allocator: allocate memory of the class appropriate to the requested size.

This also implies that the kernel must already hold slab memory units for these different lengths, i.e. such memory blocks must have been created in advance; otherwise, how could a request be matched by size to the right unit? We can see this first in the implementation of kmalloc: kmalloc calls __do_kmalloc, which looks like this:

static __always_inline void *__do_kmalloc(size_t size, gfp_t flags,
					  void *caller)
{
	struct kmem_cache *cachep;
	void *ret;

	/* Find a cache of a suitable size. */
	cachep = __find_general_cachep(size, flags);
	if (unlikely(ZERO_OR_NULL_PTR(cachep)))
		return cachep;
	ret = __cache_alloc(cachep, flags, caller);

	trace_kmalloc((unsigned long) caller, ret,
		      size, cachep->buffer_size, flags);

	return ret;
}

The key line is the call to __find_general_cachep: kmalloc receives the requested length as the parameter size and must first locate a cache matching that length. What exactly this "cache" is will be introduced shortly; first look at __find_general_cachep:

static inline struct kmem_cache *__find_general_cachep(size_t size,
						       gfp_t gfpflags)
{
	struct cache_sizes *csizep = malloc_sizes;

#if DEBUG
	/* This happens if someone tries to call
	 * kmem_cache_create(), or __kmalloc(), before
	 * the generic caches are initialized.
	 */
	BUG_ON(malloc_sizes[INDEX_AC].cs_cachep == NULL);
#endif
	if (!size)
		return ZERO_SIZE_PTR;

	/* This is the only part of the function that matters: find a
	   cache_sizes entry of a suitable size. */
	while (size > csizep->cs_size)
		csizep++;

	/*
	 * Really subtle: The last entry with cs->cs_size==ULONG_MAX
	 * has cs_{dma,}cachep==NULL. Thus no special case
	 * for large kmalloc calls required.
	 */
#ifdef CONFIG_ZONE_DMA
	if (unlikely(gfpflags & GFP_DMA))
		return csizep->cs_dmacachep;
#endif
	return csizep->cs_cachep;
}

As noted in the comment above, the only essential part of this function is the while loop: csizep is initialized to the global array malloc_sizes, and as long as size is larger than the cs_size member of the current entry, csizep advances to the next one. So now we need to see what malloc_sizes is:

struct cache_sizes malloc_sizes[] = {
#define CACHE(x) { .cs_size = (x) },
#include <linux/kmalloc_sizes.h>
	CACHE(ULONG_MAX)
#undef CACHE
};

Looking at linux/kmalloc_sizes.h (too long to reproduce here), it is just a series of CACHE(X) macro invocations. With the configuration assumed here (L1_CACHE_BYTES is 32 and KMALLOC_MAX_SIZE is 4194304), it declares CACHE(32), CACHE(64), CACHE(96), CACHE(128), CACHE(192), CACHE(256), CACHE(512), CACHE(1024), CACHE(2048), CACHE(4096), CACHE(8192), CACHE(16384), CACHE(32768), CACHE(65536), CACHE(131072), CACHE(262144), CACHE(524288), CACHE(1048576), CACHE(2097152), CACHE(4194304), and finally CACHE(0xffffffff): 21 CACHE(X) entries in total. Combine this with the structure type struct cache_sizes, which on arm effectively has two members:

struct cache_sizes {
	size_t             cs_size;
	struct kmem_cache *cs_cachep;
#ifdef CONFIG_ZONE_DMA
	struct kmem_cache *cs_dmacachep;
#endif
};

Outside x86, almost no architecture has the restriction that DMA must live in the first 16 MB of physical memory, so many architectures, arm included, do not define CONFIG_ZONE_DMA; the structure therefore effectively has two members, cs_size and cs_cachep. The picture is now clear: the global array malloc_sizes has 21 entries, each with a defined cs_size value ranging from 32 up to 4194304, plus the final 0xffffffff, and every cs_cachep initially NULL. These values are precisely the slab allocator's size classes.

Back in __find_general_cachep the logic is now obvious: starting from entry 0 of malloc_sizes, as long as the requested length exceeds the entry's class value cs_size, move to the next entry, stopping at the first class that is large enough. Taking the 100-byte request again: the 96-byte class is still too small, but the 128-byte class fits, which is why a 100-byte request actually receives a 128-byte memory unit.

Back in __do_kmalloc, the next call is __cache_alloc, which hands the requester memory of the size class just determined. This shows the kernel is able to produce such memory units.

Why is the kernel able to produce them? The slab allocator cannot, from the start, magically hand out memory of each size class; it must first establish a "rule" for each class, and only then can it allocate memory of that class according to the rule. Look again at struct cache_sizes: its member cs_cachep has type struct kmem_cache *, and this structure is exactly the "cache" concept mentioned earlier. Every slab allocation of a given size class goes through that class's cache; in other words, each cache corresponds to one size class of slab allocations. This is also a good place to note the two slab allocation interfaces: kmalloc and kmem_cache_alloc. kmalloc is the relaxed one: you pass the length you want, the slab allocator works out which size class it belongs to, and that class's kmem_cache pointer performs the allocation. kmem_cache_alloc is the more "specialized" one: instead of a length it takes a kmem_cache pointer directly, explicitly requesting memory of that class. A quick look at both call paths shows that they soon converge on __cache_alloc; only the front end differs.

For instance, suppose a kernel module wants to allocate a structure of its own design that is 111 bytes, and it wants 111-byte units rather than 128-byte ones. It must then establish its own "rule" in the slab allocator, a rule which says: when allocating under this rule, give me 111 bytes. That rule is created by calling kmem_cache_create.

Likewise, for the slab allocator to offer the 20 default size classes from 32 to 4194304 bytes, 20 such rules must be created. That happens at initialization time in kmem_cache_init. Don't get bogged down in kmem_cache_init yet; parts of it only make sense once the slab principles are understood, so look at kmem_cache_create first.

2.2 How a slab is created

Reading the member definitions of struct kmem_cache cold is confusing, so go straight to the function source:

struct kmem_cache *
kmem_cache_create (const char *name, size_t size, size_t align,
	unsigned long flags, void (*ctor)(void *))
{
	size_t left_over, slab_size, ralign;
	struct kmem_cache *cachep = NULL, *pc;
	gfp_t gfp;

	/*
	 * Sanity checks... these are all serious usage bugs.
	 */
	/* Parameter checks: the name must not be NULL, and the function must
	   not be called in interrupt context (it may sleep); the requested
	   size must be at least 4 bytes (one CPU word) and at most the
	   maximum (1 << 22 = 4 MB). */
	if (!name || in_interrupt() || (size < BYTES_PER_WORD) ||
	    size > KMALLOC_MAX_SIZE) {
		printk(KERN_ERR "%s: Early error in slab %s\n", __func__,
				name);
		BUG();
	}

	/*
	 * We use cache_chain_mutex to ensure a consistent view of
	 * cpu_online_mask as well.  Please see cpuup_callback
	 */
	if (slab_is_available()) {
		get_online_cpus();
		mutex_lock(&cache_chain_mutex);
	}

	/* Some consistency checks; nothing to dwell on here. */
	list_for_each_entry(pc, &cache_chain, next) {
		char tmp;
		int res;

		/*
		 * This happens when the module gets unloaded and doesn't
		 * destroy its slab cache and no-one else reuses the vmalloc
		 * area of the module.  Print a warning.
		 */
		res = probe_kernel_address(pc->name, tmp);
		if (res) {
			printk(KERN_ERR
			       "SLAB: cache with size %d has lost its name\n",
			       pc->buffer_size);
			continue;
		}

		if (!strcmp(pc->name, name)) {
			printk(KERN_ERR
			       "kmem_cache_create: duplicate cache %s\n", name);
			dump_stack();
			goto oops;
		}
	}

#if DEBUG
	WARN_ON(strchr(name, ' '));	/* It confuses parsers */
#if FORCED_DEBUG
	/*
	 * Enable redzoning and last user accounting, except for caches with
	 * large objects, if the increased size would increase the object size
	 * above the next power of two: caches with object sizes just above a
	 * power of two have a significant amount of internal fragmentation.
	 */
	if (size < 4096 || fls(size - 1) == fls(size-1 + REDZONE_ALIGN +
						2 * sizeof(unsigned long long)))
		flags |= SLAB_RED_ZONE | SLAB_STORE_USER;
	if (!(flags & SLAB_DESTROY_BY_RCU))
		flags |= SLAB_POISON;
#endif
	if (flags & SLAB_DESTROY_BY_RCU)
		BUG_ON(flags & SLAB_POISON);
#endif
	/*
	 * Always checks flags, a caller might be expecting debug support which
	 * isn't available.
	 */
	BUG_ON(flags & ~CREATE_MASK);

	/*
	 * Check that size is in terms of words.  This is needed to avoid
	 * unaligned accesses for some archs when redzoning is used, and makes
	 * sure any on-slab bufctl's are also correctly aligned.
	 */

	/* What follows is all about alignment. */
	if (size & (BYTES_PER_WORD - 1)) {
		size += (BYTES_PER_WORD - 1);
		size &= ~(BYTES_PER_WORD - 1);
	}

	/* calculate the final buffer alignment: */
	/* 1) arch recommendation: can be overridden for debug */
	if (flags & SLAB_HWCACHE_ALIGN) {
		/*
		 * Default alignment: as specified by the arch code.  Except if
		 * an object is really small, then squeeze multiple objects into
		 * one cacheline.
		 */
		ralign = cache_line_size();
		while (size <= ralign / 2)
			ralign /= 2;
	} else {
		ralign = BYTES_PER_WORD;
	}

	/*
	 * Redzoning and user store require word alignment or possibly larger.
	 * Note this will be overridden by architecture or caller mandated
	 * alignment if either is greater than BYTES_PER_WORD.
	 */
	if (flags & SLAB_STORE_USER)
		ralign = BYTES_PER_WORD;
	if (flags & SLAB_RED_ZONE) {
		ralign = REDZONE_ALIGN;
		/* If redzoning, ensure that the second redzone is suitably
		 * aligned, by adjusting the object size accordingly. */
		size += REDZONE_ALIGN - 1;
		size &= ~(REDZONE_ALIGN - 1);
	}

	/* 2) arch mandated alignment */
	if (ralign < ARCH_SLAB_MINALIGN) {
		ralign = ARCH_SLAB_MINALIGN;
	}
	/* 3) caller mandated alignment */
	if (ralign < align) {
		ralign = align;
	}
	/* disable debug if necessary */
	if (ralign > __alignof__(unsigned long long))
		flags &= ~(SLAB_RED_ZONE | SLAB_STORE_USER);
	/*
	 * 4) Store it.
	 */
	align = ralign;

	if (slab_is_available())
		gfp = GFP_KERNEL;
	else
		gfp = GFP_NOWAIT;

	/* Get cache's description obj. */
	/* Allocate a new kmem_cache instance from the cache_cache cache. */
	cachep = kmem_cache_zalloc(&cache_cache, gfp);
	if (!cachep)
		goto oops;

#if DEBUG
	cachep->obj_size = size;
	/*
	 * Both debugging options require word-alignment which is calculated
	 * into align above.
	 */
	if (flags & SLAB_RED_ZONE) {
		/* add space for red zone words */
		cachep->obj_offset += sizeof(unsigned long long);
		size += 2 * sizeof(unsigned long long);
	}
	if (flags & SLAB_STORE_USER) {
		/* user store requires one word storage behind the end of
		 * the real object. But if the second red zone needs to be
		 * aligned to 64 bits, we must allow that much space.
		 */
		if (flags & SLAB_RED_ZONE)
			size += REDZONE_ALIGN;
		else
			size += BYTES_PER_WORD;
	}
#if FORCED_DEBUG && defined(CONFIG_DEBUG_PAGEALLOC)
	if (size >= malloc_sizes[INDEX_L3 + 1].cs_size
	    && cachep->obj_size > cache_line_size() && size < PAGE_SIZE) {
		cachep->obj_offset += PAGE_SIZE - size;
		size = PAGE_SIZE;
	}
#endif
#endif

	/*
	 * Determine if the slab management is 'on' or 'off' slab.
	 * (bootstrapping cannot cope with offslab caches so don't do
	 * it too early on.)
	 */
	/* Decide whether the slab management structures are stored on-slab or
	   off-slab. Normally objects of 512 bytes or more use the off-slab
	   layout; during initialization the on-slab layout is always used
	   (kmem_cache_init clears slab_early_init once the two general
	   caches have been created). */
	if ((size >= (PAGE_SIZE >> 3)) && !slab_early_init)
		/*
		 * Size is large, assume best to place the slab management obj
		 * off-slab (should allow better packing of objs).
		 */
		flags |= CFLGS_OFF_SLAB;

	size = ALIGN(size, align);

	/* Compute the fragment size, how many pages make up one slab, and how
	   many objects each slab holds. */
	left_over = calculate_slab_order(cachep, size, align, flags);

	if (!cachep->num) {
		printk(KERN_ERR
		       "kmem_cache_create: couldn't create cache %s.\n", name);
		kmem_cache_free(&cache_cache, cachep);
		cachep = NULL;
		goto oops;
	}

	/* Size of the slab management structures: the struct slab object plus
	   the kmem_bufctl_t array. */
	slab_size = ALIGN(cachep->num * sizeof(kmem_bufctl_t)
			  + sizeof(struct slab), align);

	/*
	 * If the slab has been placed off-slab, and we have enough space then
	 * move it on-slab. This is at the expense of any extra colouring.
	 */
	/* If this was going to be an off-slab slab but the leftover fragment
	   is big enough to hold the management structures, move them on-slab,
	   turning it into an on-slab slab. */
	if (flags & CFLGS_OFF_SLAB && left_over >= slab_size) {
		/* Drop the off-slab flag... */
		flags &= ~CFLGS_OFF_SLAB;
		/* ...and the fragment shrinks accordingly. */
		left_over -= slab_size;
	}

	/* For a genuinely off-slab slab the management structures need no
	   alignment; restore their unaligned size. */
	if (flags & CFLGS_OFF_SLAB) {
		/* really off slab. No need for manual alignment */
		slab_size =
		    cachep->num * sizeof(kmem_bufctl_t) + sizeof(struct slab);
#ifdef CONFIG_PAGE_POISONING
		/* If we're going to use the generic kernel_map_pages()
		 * poisoning, then it's going to smash the contents of
		 * the redzone and userword anyhow, so switch them off.
		 */
		if (size % PAGE_SIZE == 0 && flags & SLAB_POISON)
			flags &= ~(SLAB_RED_ZONE | SLAB_STORE_USER);
#endif
	}

	/* Colouring unit: one cache line, 32 bytes here. */
	cachep->colour_off = cache_line_size();
	/* Offset must be a multiple of the alignment. */
	if (cachep->colour_off < align)
		cachep->colour_off = align;
	/* Number of colouring units that fit in the fragment area. */
	cachep->colour = left_over / cachep->colour_off;
	/* Size of the management structures. */
	cachep->slab_size = slab_size;
	cachep->flags = flags;
	cachep->gfpflags = 0;
	/* On arm the following if can be ignored: there is no DMA concern. */
	if (CONFIG_ZONE_DMA_FLAG && (flags & SLAB_CACHE_DMA))
		cachep->gfpflags |= GFP_DMA;
	/* Size of one slab object. */
	cachep->buffer_size = size;
	/* Reciprocal of the object size, used when computing an object's
	   index within a slab; see obj_to_index. */
	cachep->reciprocal_buffer_size = reciprocal_value(size);

	/* For an off-slab slab, find a general cache to hold the management
	   structures and store it in slabp_cache; for an on-slab slab this
	   pointer stays NULL. */
	if (flags & CFLGS_OFF_SLAB) {
		cachep->slabp_cache = kmem_find_general_cachep(slab_size, 0u);
		/*
		 * This is a possibility for one of the malloc_sizes caches.
		 * But since we go off slab only for object size greater than
		 * PAGE_SIZE/8, and malloc_sizes gets created in ascending order,
		 * this should not happen at all.
		 * But leave a BUG_ON for some lucky dude.
		 */
		BUG_ON(ZERO_OR_NULL_PTR(cachep->slabp_cache));
	}
	/* The cache's constructor and name. */
	cachep->ctor = ctor;
	cachep->name = name;

	/* Set up the per-CPU local caches and the three slab lists. */
	if (setup_cpu_cache(cachep, gfp)) {
		__kmem_cache_destroy(cachep);
		cachep = NULL;
		goto oops;
	}

	/* cache setup completed, link it into the list */
	list_add(&cachep->next, &cache_chain);
oops:
	if (!cachep && (flags & SLAB_PANIC))
		panic("kmem_cache_create(): failed to create slab `%s'\n",
		      name);
	if (slab_is_available()) {
		mutex_unlock(&cache_chain_mutex);
		put_online_cpus();
	}
	return cachep;
}

---------------------------------------------------------------------------------------------------------------------------------

Everything up to "if (slab_is_available()) gfp = GFP_KERNEL;" can be skimmed: it is environment and parameter checking (note that this function may sleep, so it must never be called from interrupt context) plus a pile of alignment handling. Now look at this part:

if (slab_is_available())
	gfp = GFP_KERNEL;
else
	gfp = GFP_NOWAIT;

Here the value of gfp is chosen according to whether slab initialization has completed. gfp should be familiar: it specifies where and how memory is obtained from the buddy system. Using GFP_KERNEL once slab is up explains why the function may sleep, while GFP_NOWAIT before slab initialization completes means it will not sleep.

---------------------------------------------------------------------------------------------------------------------------------

Next a kmem_cache structure is obtained via kmem_cache_zalloc. Its only difference from kmem_cache_alloc is that it zeroes the allocated region, i.e. it adds the __GFP_ZERO flag to kmem_cache_alloc's gfp parameter; otherwise they are identical.

From section 2.1 we know that to obtain memory of some length from the slab allocator, a "rule" for that length must already exist. Here we need memory the size of a kmem_cache structure, so a rule for that length is required too. Sure enough, that rule is also created in the initialization function kmem_cache_init, and the product of creating it is the global variable cache_cache. So whenever a kmem_cache-sized allocation is needed, it goes through cache_cache, an already-established kmem_cache instance.

However, cache_cache is not the best example for understanding slab creation, for reasons that will become clear later; keep following kmem_cache_create instead. The next step decides how the slab management structures are stored:

if ((size >= (PAGE_SIZE >> 3)) && !slab_early_init)
	/*
	 * Size is large, assume best to place the slab management obj
	 * off-slab (should allow better packing of objs).
	 */
	flags |= CFLGS_OFF_SLAB;

This introduces the two storage layouts for slab management structures: on-slab (internal) and off-slab (external). Briefly, on-slab means the management data and the memory actually handed out live in the same allocated region; off-slab means the management data gets its own separate allocation, apart from the memory it manages. The management data consists of the struct slab descriptor and the object descriptors, described in detail later. The if above means: once slab initialization has completed, a cache whose object size is at least (PAGE_SIZE >> 3), i.e. 512 bytes, uses the off-slab layout, otherwise on-slab; before initialization completes, on-slab is always used.

---------------------------------------------------------------------------------------------------------------------------------

Next comes left_over = calculate_slab_order(cachep, size, align, flags);. For a rule with object size size, this computes how many physical pages the resulting slab should span, how many objects of that size it holds, and how much is left over as a fragment (the fragment being the part of the allocated region that cannot hold another object):

static size_t calculate_slab_order(struct kmem_cache *cachep,
			size_t size, size_t align, unsigned long flags)
{
	unsigned long offslab_limit;
	size_t left_over = 0;
	int gfporder;

	for (gfporder = 0; gfporder <= KMALLOC_MAX_ORDER; gfporder++) {
		unsigned int num;
		size_t remainder;

		/* Compute the number of objects per slab and the wasted space.
		   Parameters:
		     gfporder:    the slab spans 2^gfporder pages
		     buffer_size: object size
		     align:       object alignment
		     flags:       off-slab or on-slab layout
		     remainder:   wasted (fragment) space in the slab
		     num:         number of objects in the slab */
		cache_estimate(gfporder, size, align, flags, &remainder, &num);
		if (!num)
			continue;

		if (flags & CFLGS_OFF_SLAB) {
			/*
			 * Max number of objs-per-slab for caches which
			 * use off-slab slabs. Needed to avoid a possible
			 * looping condition in cache_grow().
			 */
			offslab_limit = size - sizeof(struct slab);
			offslab_limit /= sizeof(kmem_bufctl_t);
			if (num > offslab_limit)
				break;
		}