
An In-Depth Analysis of the LRU Eviction Policy in Redis

Preface

When Redis is used as a cache, some scenarios require keeping an eye on memory consumption. Redis deletes expired keys to free space, and there are two deletion strategies for expired keys (a short redis-cli session illustrating them follows this list):

  • Lazy deletion: whenever a key is fetched from the keyspace, Redis checks whether it has expired; if it has, the key is deleted, otherwise it is returned.
  • Periodic deletion: at regular intervals, Redis scans the database and deletes the expired keys it finds.
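
For instance, a key created with a TTL is removed either by the periodic scan or, as below, when a client touches it after expiry (key name and values are illustrative):

127.0.0.1:6379> SET session:42 "alice" EX 5
OK
127.0.0.1:6379> TTL session:42
(integer) 5
(more than 5 seconds later)
127.0.0.1:6379> GET session:42
(nil)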

In addition, Redis can enable LRU eviction to discard key-value pairs automatically.

The LRU Algorithm

When data must be evicted from the cache, we would like to evict data that will never be used again and keep data that will still be accessed frequently, but the fundamental problem is that a cache cannot foresee the future. One way out is the prediction LRU makes: data that has been accessed recently is more likely to be accessed again in the future. Cached data typically follows an access distribution in which a small portion of the data receives the overwhelming majority of accesses. When the access pattern rarely changes, we can record the last access time of each datum, and the datum with the smallest idle time can be regarded as the one most likely to be accessed in the future.

Consider the following access pattern: A is accessed every 5s, B every 2s, and C and D every 10s; the | marks the cutoff point at which idle times are computed:

~~~~~A~~~~~A~~~~~A~~~~A~~~~~A~~~~~A~~|
~~B~~B~~B~~B~~B~~B~~B~~B~~B~~B~~B~~B~|
~~~~~~~~~~C~~~~~~~~~C~~~~~~~~~C~~~~~~|
~~~~~D~~~~~~~~~~D~~~~~~~~~D~~~~~~~~~D|

As you can see, LRU works well for A, B and C, perfectly predicting the future access probabilities as B > A > C; for D, however, it records the smallest idle time of all, wrongly ranking D as the most likely to be accessed next even though D is accessed no more often than C.

That said, on the whole, LRU is an algorithm whose performance is good enough in practice.
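
To make the idea concrete, here is a minimal, self-contained C sketch (illustrative only, not Redis code) of an exact last-access-time LRU cache: every access stamps the entry with the current time, and on a miss the entry that has been idle the longest is evicted:

#include <stdio.h>
#include <string.h>

#define CACHE_SIZE 4

typedef struct {
    char key[16];     /* cached key; empty string marks a free slot */
    long last_access; /* timestamp of the most recent access */
} entry;

static entry cache[CACHE_SIZE];

/* Access `key` at time `now`: refresh its timestamp on a hit; on a miss,
 * insert it, evicting the entry with the largest idle time if full. */
static void access_key(const char *key, long now) {
    int free_slot = -1, lru_slot = 0;
    for (int i = 0; i < CACHE_SIZE; i++) {
        if (cache[i].key[0] && strcmp(cache[i].key, key) == 0) {
            cache[i].last_access = now; /* hit: refresh recency */
            return;
        }
        if (!cache[i].key[0])
            free_slot = i; /* remember a free slot */
        else if (cache[i].last_access < cache[lru_slot].last_access)
            lru_slot = i; /* track the least recently used entry */
    }
    int slot = (free_slot != -1) ? free_slot : lru_slot;
    snprintf(cache[slot].key, sizeof(cache[slot].key), "%s", key);
    cache[slot].last_access = now;
}

int main(void) {
    const char *pattern[] = { "B", "A", "B", "C", "B", "A", "D", "B" };
    long now = 0;
    for (int i = 0; i < 8; i++) access_key(pattern[i], now++);
    for (int i = 0; i < CACHE_SIZE; i++)
        if (cache[i].key[0])
            printf("%s idle=%ld\n", cache[i].key, now - cache[i].last_access);
    return 0;
}

Keeping such per-entry bookkeeping (or a doubly linked list) for an entire keyspace is exactly the overhead Redis avoids, as discussed in the approximated LRU section below.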

LRU Configuration Parameters

Three Redis configuration directives relate to LRU (a sample configuration follows this list):

  • maxmemory: the memory limit Redis may use for data, e.g. 100mb. When the memory consumed by the cache exceeds this value, data eviction is triggered. A value of 0 means there is no limit on the amount of cached data, i.e. LRU eviction is disabled. The default is 0 on 64-bit systems, while 32-bit systems use an implicit memory limit of 3GB.
  • maxmemory-policy: the eviction policy applied once data must be evicted.
  • maxmemory-samples: the sampling precision, i.e. the number of keys picked at random per eviction. The larger the value, the closer the result is to the exact LRU algorithm, but the higher the cost and the bigger the performance impact; the default sample size is 5.
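
As a concrete example, a redis.conf fragment that caps memory at 100mb and enables approximated LRU over all keys (values are illustrative):

maxmemory 100mb
maxmemory-policy allkeys-lru
maxmemory-samples 5

The same directives can also be changed at runtime, e.g. CONFIG SET maxmemory 100mb.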

Eviction Policies

maxmemory-policy can be set to one of the following eviction policies:

  • noeviction: if the cached data exceeds the maxmemory limit and the command a client is executing would allocate memory (most write commands, with DEL and a few others as exceptions), return an error to the client (see the example after this list).
  • allkeys-lru: apply LRU eviction to all keys.
  • volatile-lru: apply LRU eviction only to keys with an expire set.
  • allkeys-random: evict random keys among all keys.
  • volatile-random: evict random keys among those with an expire set.
  • volatile-ttl: evict only keys with an expire set, preferring keys with a smaller remaining TTL (Time To Live).
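
For example, with noeviction in effect and the memory limit reached, a write command is rejected with Redis's standard out-of-memory error (shared.oomerr in the source):

127.0.0.1:6379> SET foo bar
(error) OOM command not allowed when used memory > 'maxmemory'.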

volatile-lru, volatile-random and volatile-ttl do not operate on the full keyspace, so they may fail to free enough memory. When there are no expired keys, or no keys with an expire set at all, these three policies behave much like noeviction.

Common rules of thumb:

  • Use allkeys-lru when requests are expected to follow a power-law distribution (the 80/20 rule, etc.), i.e. when a subset of elements is accessed far more often than the rest.
  • Use allkeys-random when all keys are scanned in a continuous loop, or when the request distribution is expected to be uniform (all elements are accessed with roughly the same probability).
  • Use volatile-ttl only when the TTL values of cached objects differ enough to make the policy meaningful.

volatile-lru and volatile-random are useful when you want a single Redis instance to serve both as a cache with eviction and as a store for a set of persistent, frequently used keys: keys without an expire are persisted, while keys with an expire participate in eviction. That said, running two separate instances is usually the better way to solve this problem.

Setting an expire on a key also costs memory, so a policy like allkeys-lru is more space-efficient, since under it keys need no expire at all.

Approximated LRU Algorithm

As we know, an exact LRU algorithm needs a doubly linked list to track the order in which data was most recently accessed, but to save memory Redis does not implement LRU in full. Rather than always picking the least recently used key, Redis runs an approximated LRU algorithm: it samples a small number of keys and evicts the least recently used key among them. The precision of the algorithm can be tuned by changing the number of keys sampled per eviction, maxmemory-samples; a rough sketch of the sampling idea follows.
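
The sketch below, assuming a plain array of per-key last-access clocks (illustrative only, not the actual Redis implementation), shows the core of the strategy:

#include <stdlib.h>

#define NUM_KEYS 1024

/* Pick an eviction victim by sampling `samples` random keys and returning
 * the one with the largest idle time, instead of scanning all NUM_KEYS. */
static int pick_victim(const unsigned last_access[], unsigned now, int samples) {
    int victim = -1;
    unsigned best_idle = 0;
    for (int i = 0; i < samples; i++) {
        int candidate = rand() % NUM_KEYS;      /* random sample */
        unsigned idle = now - last_access[candidate];
        if (victim == -1 || idle > best_idle) { /* keep the most idle key */
            victim = candidate;
            best_idle = idle;
        }
    }
    return victim; /* approximates, but does not guarantee, the global LRU key */
}

Raising samples makes the result converge toward exact LRU at the cost of more work per eviction, which is precisely the maxmemory-samples trade-off.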

According to the Redis author, 24 bits can be squeezed out of every Redis Object. 24 bits is not enough to store two list pointers, but it is enough for a low-order timestamp, so each Redis Object stores, in seconds, the unix time at which it was created or last updated: the LRU clock. It takes about 194 days for a 24-bit seconds counter to overflow (2^24 s = 16,777,216 s ≈ 194 days), and since cached data is updated far more frequently than that, this is good enough.

Redis keeps its keyspace in a hash table; selecting the globally least recently accessed key would require an extra data structure to hold this metadata, which is clearly not worth the cost. Initially, Redis simply picked 3 keys at random and evicted the least recently used among them; the algorithm was later generalized to N keys, with a default of 5.

Redis 3.0 improved the algorithm's performance further by adding a pool of eviction candidates, holding 16 keys by default, sorted by idle time. When the pool is updated, N keys are sampled at random from the Redis keyspace and their idle times are computed; a key enters the pool only when the pool is not yet full or its idle time is larger than the smallest idle time in the pool. The key with the largest idle time is then taken from the pool and evicted.

Exact LRU and the approximated LRU algorithm can be compared with the image below (from the referenced post Random notes on improving the Redis LRU algorithm):

The light gray band shows objects that were evicted, the gray band objects that were kept, and the green band newly added objects. As the image shows, with maxmemory-samples set to 5, Redis 3.0 performs better than Redis 2.8, and with a sample size of 10 the approximated LRU of Redis 3.0 comes very close to the theoretical performance of exact LRU.

When the data access pattern closely follows a power-law distribution, that is, when most accesses concentrate on a subset of keys, the approximated LRU algorithm handles things very well.

In simulations with a power-law access pattern, exact LRU and approximated LRU turned out to be almost indistinguishable.

LRU Source Code Analysis

Keys and values in Redis are handled as redisObject structures:

typedef struct redisObject {
 unsigned type:4;
 unsigned encoding:4;
 unsigned lru:LRU_BITS; /* LRU time (relative to global lru_clock) or
       * LFU data (least significant 8 bits frequency
       * and most significant 16 bits access time). */
 int refcount;
 void *ptr;
} robj;

The lru field, an unsigned bitfield of the low 24 bits (LRU_BITS), records the redisObject's LRU time.
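
The constants behind this field are defined in server.h:

#define LRU_BITS 24
#define LRU_CLOCK_MAX ((1<<LRU_BITS)-1) /* Max value of obj->lru */
#define LRU_CLOCK_RESOLUTION 1000 /* LRU clock resolution in ms */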

Every Redis command that accesses cached data ends up calling the function lookupKey:

robj *lookupKey(redisDb *db,robj *key,int flags) {
 dictEntry *de = dictFind(db->dict,key->ptr);
 if (de) {
  robj *val = dictGetVal(de);

  /* Update the access time for the ageing algorithm.
   * Don't do it if we have a saving child, as this will trigger
   * a copy on write madness. */
  if (server.rdb_child_pid == -1 &&
   server.aof_child_pid == -1 &&
   !(flags & LOOKUP_NOTOUCH))
  {
   if (server.maxmemory_policy & MAXMEMORY_FLAG_LFU) {
    updateLFU(val);
   } else {
    val->lru = LRU_CLOCK();
   }
  }
  return val;
 } else {
  return NULL;
 }
}

When the policy is LRU (rather than LFU), this function updates the object's lru value, setting it to LRU_CLOCK():

/* Return the LRU clock, based on the clock resolution. This is a time
 * in a reduced-bits format that can be used to set and check the
 * object->lru field of redisObject structures. */
unsigned int getLRUClock(void) {
 return (mstime()/LRU_CLOCK_RESOLUTION) & LRU_CLOCK_MAX;
}

/* This function is used to obtain the current LRU clock.
 * If the current resolution is lower than the frequency we refresh the
 * LRU clock (as it should be in production servers) we return the
 * precomputed value, otherwise we need to resort to a system call. */
unsigned int LRU_CLOCK(void) {
 unsigned int lruclock;
 if (1000/server.hz <= LRU_CLOCK_RESOLUTION) {
  atomicGet(server.lruclock,lruclock);
 } else {
  lruclock = getLRUClock();
 }
 return lruclock;
}

LRU_CLOCK() depends on LRU_CLOCK_RESOLUTION (default 1000 ms), the precision of the LRU algorithm, i.e. how long one LRU time unit lasts. server.hz is the frequency of the server's background cron: when the period of the server clock refresh (1000/server.hz ms) is no larger than the LRU resolution, LRU_CLOCK() simply returns the server's precomputed clock, avoiding the cost of a system call. With the default hz of 10, 1000/10 = 100 ms ≤ 1000 ms, so the cached value is used.

The entry point for Redis command processing is processCommand:

int processCommand(client *c) {

 /* Handle the maxmemory directive.
  *
  * Note that we do not want to reclaim memory if we are here re-entering
  * the event loop since there is a busy Lua script running in timeout
  * condition, to avoid mixing the propagation of scripts with the
  * propagation of DELs due to eviction. */
 if (server.maxmemory && !server.lua_timedout) {
  int out_of_memory = freeMemoryIfNeededAndSafe() == C_ERR;
  /* freeMemoryIfNeeded may flush slave output buffers. This may result
   * into a slave, that may be the active client, to be freed. */
  if (server.current_client == NULL) return C_ERR;

  /* It was impossible to free enough memory, and the command the client
   * is trying to execute is denied during OOM conditions or the client
   * is in MULTI/EXEC context? Error. */
  if (out_of_memory &&
   (c->cmd->flags & CMD_DENYOOM ||
    (c->flags & CLIENT_MULTI && c->cmd->proc != execCommand))) {
   flagTransaction(c);
   addReply(c,shared.oomerr);
   return C_OK;
  }
 }
}

Only the memory-reclaiming part is shown above. freeMemoryIfNeededAndSafe is a safety wrapper around freeMemoryIfNeeded, the function that actually frees memory:

int freeMemoryIfNeeded(void) {
 /* By default replicas should ignore maxmemory
  * and just be masters exact copies. */
 if (server.masterhost && server.repl_slave_ignore_maxmemory) return C_OK;

 size_t mem_reported,mem_tofree,mem_freed;
 mstime_t latency,eviction_latency;
 long long delta;
 int slaves = listLength(server.slaves);

 /* When clients are paused the dataset should be static not just from the
  * POV of clients not being able to write, but also from the POV of
  * expires and evictions of keys not being performed. */
 if (clientsArePaused()) return C_OK;
 if (getMaxmemoryState(&mem_reported,NULL,&mem_tofree,NULL) == C_OK)
  return C_OK;

 mem_freed = 0;

 if (server.maxmemory_policy == MAXMEMORY_NO_EVICTION)
  goto cant_free; /* We need to free memory, but policy forbids. */

 latencyStartMonitor(latency);
 while (mem_freed < mem_tofree) {
  int j,k,i,keys_freed = 0;
  static unsigned int next_db = 0;
  sds bestkey = NULL;
  int bestdbid;
  redisDb *db;
  dict *dict;
  dictEntry *de;

  if (server.maxmemory_policy & (MAXMEMORY_FLAG_LRU|MAXMEMORY_FLAG_LFU) ||
   server.maxmemory_policy == MAXMEMORY_VOLATILE_TTL)
  {
   struct evictionPoolEntry *pool = EvictionPoolLRU;

   while(bestkey == NULL) {
    unsigned long total_keys = 0,keys;

    /* We don't want to make local-db choices when expiring keys,
     * so to start populate the eviction pool sampling keys from
     * every DB. */
    for (i = 0; i < server.dbnum; i++) {
     db = server.db+i;
     dict = (server.maxmemory_policy & MAXMEMORY_FLAG_ALLKEYS) ?
       db->dict : db->expires;
     if ((keys = dictSize(dict)) != 0) {
      evictionPoolPopulate(i,dict,db->dict,pool);
      total_keys += keys;
     }
    }
    if (!total_keys) break; /* No keys to evict. */

    /* Go backward from best to worst element to evict. */
    for (k = EVPOOL_SIZE-1; k >= 0; k--) {
     if (pool[k].key == NULL) continue;
     bestdbid = pool[k].dbid;

     if (server.maxmemory_policy & MAXMEMORY_FLAG_ALLKEYS) {
      de = dictFind(server.db[pool[k].dbid].dict,pool[k].key);
     } else {
      de = dictFind(server.db[pool[k].dbid].expires,pool[k].key);
     }

     /* Remove the entry from the pool. */
     if (pool[k].key != pool[k].cached)
      sdsfree(pool[k].key);
     pool[k].key = NULL;
     pool[k].idle = 0;

     /* If the key exists, is our pick. Otherwise it is
      * a ghost and we need to try the next element. */
     if (de) {
      bestkey = dictGetKey(de);
      break;
     } else {
      /* Ghost... Iterate again. */
     }
    }
   }
  }

  /* volatile-random and allkeys-random policy */
  else if (server.maxmemory_policy == MAXMEMORY_ALLKEYS_RANDOM ||
     server.maxmemory_policy == MAXMEMORY_VOLATILE_RANDOM)
  {
   /* When evicting a random key, we try to evict a key for
    * each DB, so we use the static 'next_db' variable to
    * incrementally visit all DBs. */
   for (i = 0; i < server.dbnum; i++) {
    j = (++next_db) % server.dbnum;
    db = server.db+j;
    dict = (server.maxmemory_policy == MAXMEMORY_ALLKEYS_RANDOM) ?
      db->dict : db->expires;
    if (dictSize(dict) != 0) {
     de = dictGetRandomKey(dict);
     bestkey = dictGetKey(de);
     bestdbid = j;
     break;
    }
   }
  }

  /* Finally remove the selected key. */
  if (bestkey) {
   db = server.db+bestdbid;
   robj *keyobj = createStringObject(bestkey,sdslen(bestkey));
   propagateExpire(db,keyobj,server.lazyfree_lazy_eviction);
   /* We compute the amount of memory freed by db*Delete() alone.
    * It is possible that actually the memory needed to propagate
    * the DEL in AOF and replication link is greater than the one
    * we are freeing removing the key, but we can't account for
    * that otherwise we would never exit the loop.
    *
    * AOF and Output buffer memory will be freed eventually so
    * we only care about memory used by the key space. */
   delta = (long long) zmalloc_used_memory();
   latencyStartMonitor(eviction_latency);
   if (server.lazyfree_lazy_eviction)
    dbAsyncDelete(db,keyobj);
   else
    dbSyncDelete(db,keyobj);
   latencyEndMonitor(eviction_latency);
   latencyAddSampleIfNeeded("eviction-del",eviction_latency);
   latencyRemoveNestedEvent(latency,eviction_latency);
   delta -= (long long) zmalloc_used_memory();
   mem_freed += delta;
   server.stat_evictedkeys++;
   notifyKeyspaceEvent(NOTIFY_EVICTED,"evicted",db->id);
   decrRefCount(keyobj);
   keys_freed++;

   /* When the memory to free starts to be big enough, we may
    * start spending so much time here that is impossible to
    * deliver data to the slaves fast enough,so we force the
    * transmission here inside the loop. */
   if (slaves) flushSlavesOutputBuffers();

   /* Normally our stop condition is the ability to release
    * a fixed, pre-computed amount of memory. However when we
    * are deleting objects in another thread, it's better to
    * check, from time to time, if we already reached our target
    * memory, since the "mem_freed" amount is computed only
    * across the dbAsyncDelete() call, while the thread can
    * release the memory all the time. */
   if (server.lazyfree_lazy_eviction && !(keys_freed % 16)) {
    if (getMaxmemoryState(NULL,NULL,NULL,NULL) == C_OK) {
     /* Let's satisfy our stop condition. */
     mem_freed = mem_tofree;
    }
   }
  }

  if (!keys_freed) {
   latencyEndMonitor(latency);
   latencyAddSampleIfNeeded("eviction-cycle",latency);
   goto cant_free; /* nothing to free... */
  }
 }
 latencyEndMonitor(latency);
 latencyAddSampleIfNeeded("eviction-cycle",latency);
 return C_OK;

cant_free:
 /* We are here if we are not able to reclaim memory. There is only one
  * last thing we can try: check if the lazyfree thread has jobs in queue
  * and wait... */
 while(bioPendingJobsOfType(BIO_LAZY_FREE)) {
  if (((mem_reported - zmalloc_used_memory()) + mem_freed) >= mem_tofree)
   break;
  usleep(1000);
 }
 return C_ERR;
}

/* This is a wrapper for freeMemoryIfNeeded() that only really calls the
 * function if right now there are the conditions to do so safely:
 *
 * - There must be no script in timeout condition.
 * - Nor we are loading data right now.
 *
 */
int freeMemoryIfNeededAndSafe(void) {
 if (server.lua_timedout || server.loading) return C_OK;
 return freeMemoryIfNeeded();
}

The various maxmemory-policy eviction strategies are implemented inside this function.

With an LRU policy, you can see that, starting from database 0 (there are 16 databases by default), either the redisDb's dict (all keys) or its expires (keys with an expire) is selected according to the policy and used to update the candidate key pool; the pool is updated by evictionPoolPopulate:

void evictionPoolPopulate(int dbid,dict *sampledict,dict *keydict,struct evictionPoolEntry *pool) {
 int j, k, count;
 dictEntry *samples[server.maxmemory_samples];

 count = dictGetSomeKeys(sampledict,samples,server.maxmemory_samples);
 for (j = 0; j < count; j++) {
  unsigned long long idle;
  sds key;
  robj *o;
  dictEntry *de;

  de = samples[j];
  key = dictGetKey(de);

  /* If the dictionary we are sampling from is not the main
   * dictionary (but the expires one) we need to lookup the key
   * again in the key dictionary to obtain the value object. */
  if (server.maxmemory_policy != MAXMEMORY_VOLATILE_TTL) {
   if (sampledict != keydict) de = dictFind(keydict,key);
   o = dictGetVal(de);
  }

  /* Calculate the idle time according to the policy. This is called
   * idle just because the code initially handled LRU, but is in fact
   * just a score where an higher score means better candidate. */
  if (server.maxmemory_policy & MAXMEMORY_FLAG_LRU) {
   idle = estimateObjectIdleTime(o);
  } else if (server.maxmemory_policy & MAXMEMORY_FLAG_LFU) {
   /* When we use an LRU policy, we sort the keys by idle time
    * so that we expire keys starting from greater idle time.
    * However when the policy is an LFU one, we have a frequency
    * estimation, and we want to evict keys with lower frequency
    * first. So inside the pool we put objects using the inverted
    * frequency subtracting the actual frequency to the maximum
    * frequency of 255. */
   idle = 255-LFUDecrAndReturn(o);
  } else if (server.maxmemory_policy == MAXMEMORY_VOLATILE_TTL) {
   /* In this case the sooner the expire the better. */
   idle = ULLONG_MAX - (long)dictGetVal(de);
  } else {
   serverPanic("Unknown eviction policy in evictionPoolPopulate()");
  }

  /* Insert the element inside the pool.
   * First, find the first empty bucket or the first populated
   * bucket that has an idle time smaller than our idle time. */
  k = 0;
  while (k < EVPOOL_SIZE &&
    pool[k].key &&
    pool[k].idle < idle) k++;
  if (k == 0 && pool[EVPOOL_SIZE-1].key != NULL) {
   /* Can't insert if the element is < the worst element we have
    * and there are no empty buckets. */
   continue;
  } else if (k < EVPOOL_SIZE && pool[k].key == NULL) {
   /* Inserting into empty position. No setup needed before insert. */
  } else {
   /* Inserting in the middle. Now k points to the first element
    * greater than the element to insert. */
   if (pool[EVPOOL_SIZE-1].key == NULL) {
    /* Free space on the right? Insert at k shifting
     * all the elements from k to end to the right. */

    /* Save SDS before overwriting. */
    sds cached = pool[EVPOOL_SIZE-1].cached;
    memmove(pool+k+1,pool+k,sizeof(pool[0])*(EVPOOL_SIZE-k-1));
    pool[k].cached = cached;
   } else {
    /* No free space on right? Insert at k-1 */
    k--;
    /* Shift all elements on the left of k (included) to the
     * left, so we discard the element with smaller idle time. */
    sds cached = pool[0].cached; /* Save SDS before overwriting. */
    if (pool[0].key != pool[0].cached) sdsfree(pool[0].key);
    memmove(pool,pool+1,sizeof(pool[0])*k);
    pool[k].cached = cached;
   }
  }

  /* Try to reuse the cached SDS string allocated in the pool entry,
   * because allocating and deallocating this object is costly
   * (according to the profiler, not my fantasy. Remember:
   * premature optimizbla bla bla bla. */
  int klen = sdslen(key);
  if (klen > EVPOOL_CACHED_SDS_SIZE) {
   pool[k].key = sdsdup(key);
  } else {
   memcpy(pool[k].cached,key,klen+1);
   sdssetlen(pool[k].cached,klen);
   pool[k].key = pool[k].cached;
  }
  pool[k].idle = idle;
  pool[k].dbid = dbid;
 }
}

Redis samples maxmemory-samples keys at random, computes their idle times, and lets a key enter the pool when the qualifying condition is met (its idle time is larger than that of some key already in the pool). After the pool has been updated, the key with the largest idle time in the pool is evicted.

estimateObjectIdleTime computes a Redis object's idle time:

/* Given an object returns the min number of milliseconds the object was never
 * requested, using an approximated LRU algorithm. */
unsigned long long estimateObjectIdleTime(robj *o) {
 unsigned long long lruclock = LRU_CLOCK();
 if (lruclock >= o->lru) {
  return (lruclock - o->lru) * LRU_CLOCK_RESOLUTION;
 } else {
  return (lruclock + (LRU_CLOCK_MAX - o->lru)) *
     LRU_CLOCK_RESOLUTION;
 }
}

The idle time is essentially the difference between the object's lru value and the global LRU_CLOCK(), multiplied by the precision LRU_CLOCK_RESOLUTION to convert LRU units (seconds) into milliseconds; the else branch handles wrap-around of the 24-bit clock. For example, with a global clock of 100 and an o->lru of 40, the idle time is (100 - 40) * 1000 = 60,000 ms.

References

  • Random notes on improving the Redis LRU algorithm
  • Using Redis as an LRU cache

Summary

That is all for this article; hopefully its content serves as a useful reference for your study or work.