
Redis RDB and AOF


References

  1. Redis源碼學習-AOF數據持久化原理分析(0)
  2. Redis源碼學習-AOF數據持久化原理分析(1)
  3. Redis · 特性分析 · AOF Rewrite 分析
  4. 深入剖析 redis AOF 持久化策略
  5. 函數sync、fsync與fdatasync總結整理
Redis is an in-memory database: it keeps its data in its own memory, which means that a machine crash or power loss would wipe out the in-memory data. To prevent such loss, Redis provides two persistence mechanisms, RDB and AOF. The sections below introduce how RDB and AOF work and how they are implemented.

RDB

Redis provides two commands for generating an RDB file:

  1. SAVE
  2. BGSAVE

The names hint at how the two commands differ. SAVE blocks the server process while it runs, whereas BGSAVE forks a child process which then generates the RDB file. It is worth noting that if an AOF file exists at startup, Redis prefers it for restoring the database state, and falls back to the RDB file only when no AOF file exists.
While BGSAVE is running the server can still accept client commands, but SAVE, BGSAVE, and BGREWRITEAOF are handled differently from normal: during a BGSAVE, both SAVE and BGSAVE are rejected by the server, and BGREWRITEAOF is deferred until the BGSAVE has finished.
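BGSAVE boils down to the classic fork pattern: the child inherits a copy-on-write view of the parent's memory and can serialize it at leisure while the parent keeps serving clients. A minimal sketch of the pattern (illustrative only, not Redis code; the function name is invented):

```c
#include <sys/types.h>
#include <unistd.h>

/* Minimal sketch of the BGSAVE pattern: fork() gives the child a
 * copy-on-write snapshot of the parent's memory; the child serializes
 * it while the parent keeps serving clients. */
int bgsave_sketch(void) {
    pid_t pid = fork();
    if (pid == 0) {
        /* child: would serialize the dataset to a temp RDB file here */
        _exit(0);
    } else if (pid > 0) {
        /* parent: would record the child pid and keep handling clients */
        return 0;
    }
    return -1; /* fork failed */
}
```

The key point is that the child never needs a lock: the kernel's copy-on-write pages give it a consistent view of the data as it was at the moment of the fork.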

Tip:

  1. The lastsave command returns the time of the last successful RDB save, corresponding to rdb_last_save_time in the info statistics.

Periodic automatic RDB saving

Redis lets the user set the server's save option so that the server automatically runs BGSAVE at intervals to generate an RDB file.
For example, suppose the configuration file contains:

save 900 1 
save 300 10 
save 60 10000

Then BGSAVE runs as soon as any one of the following three conditions is met:

  1. The server has made at least 1 modification to the database within 900 seconds
  2. The server has made at least 10 modifications to the database within 300 seconds
  3. The server has made at least 10000 modifications to the database within 60 seconds

Tip:

  1. If no save option is set when Redis starts, either via a configuration file or command-line arguments, the server applies the following defaults:
save 900 1 
save 300 10 
save 60 10000
  2. The RDB file's directory is set by the dir directive and its name by dbfilename. Both can be changed at runtime with config set dir ${newdir} and config set dbfilename ${newfilename}.
  3. By default Redis compresses the RDB file with the LZF algorithm; this can be toggled with config set rdbcompression {yes|no}.
  4. The redis-check-dump tool can be used to check an RDB file.

How automatic saving is implemented

The redisServer struct contains a saveparams field:

struct redisServer{
    struct saveparam * saveparams;
    /* ... */
};

saveparams is an array, each element of which holds one set of parameters:

struct saveparam {
    time_t seconds; // interval in seconds
    int changes;    // number of modifications
};

The redisServer struct also maintains a dirty counter and a lastsave field:

  1. dirty: counts how many modifications the server has applied to the database since the last successful save or bgsave.
  2. lastsave: a Unix timestamp recording when the server last executed bgsave or save.

Checking the conditions from the time event

The server's periodic function serverCron runs every 100 ms by default. One of its jobs is to check whether the save conditions are met: it walks through the entries in saveparams in turn, and as soon as any one condition is satisfied the server executes bgsave. Afterwards it resets dirty to zero and updates lastsave to the current time.
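The check described above can be sketched as a small standalone helper (an illustrative simplification, not the actual serverCron code; the struct mirrors the one shown earlier, and the function name should_bgsave is invented):

```c
#include <time.h>

struct saveparam { time_t seconds; int changes; };

/* Return 1 if any configured (seconds, changes) pair is satisfied,
 * i.e. at least `changes` modifications have happened and at least
 * `seconds` seconds have passed since the last successful save. */
int should_bgsave(const struct saveparam *params, int n,
                  long long dirty, time_t lastsave, time_t now) {
    for (int i = 0; i < n; i++) {
        if (dirty >= params[i].changes &&
            now - lastsave >= params[i].seconds)
            return 1;
    }
    return 0;
}
```

With the default three save lines, a single write makes a BGSAVE due only after 900 seconds have elapsed, while 10000 writes trigger one within 60 seconds.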

AOF

An AOF file stores each command in the client protocol format: a *<count>\r\n header giving the number of arguments, followed by each argument as a length-prefixed string:

*<count>\r\n$<length>\r\n<content>\r\n

For example, the commands:

select 0 
set k1 v1 

translate into the following AOF content:

    ##AOF file format; "##" marks a comment, not actual file content
    ##select the DB
    *2  
    $6  
    SELECT  
    $1  
    0  
    ##SET k1 v1
    *3  
    $3  
    SET  
    $2  
    k1  
    $2  
    v1  

Writing the AOF

Call path

If Redis's AOF write configuration is enabled, a command that arrives at the server passes roughly through the following call chain before it is finally written to the AOF:

  1. aeMain
  2. aeProcessEvents
  3. readQueryFromClient
  4. processInputBuffer
  5. processCommand
  6. call
  7. propagate
  8. feedAppendOnlyFile

Converting a command into AOF format

The feedAppendOnlyFile function

As the call path shows, the function that does the real work is feedAppendOnlyFile in aof.c. Its definition is as follows:

void feedAppendOnlyFile(struct redisCommand *cmd, int dictid, robj **argv, int argc) {
    sds buf = sdsempty();
    robj *tmpargv[3];

    /* The DB this command was targeting is not the same as the last command
     * we appended. To issue a SELECT command is needed, to make sure the
     * following commands are applied to the correct database. */
    if (dictid != server.aof_selected_db) {
        char seldb[64];

        snprintf(seldb,sizeof(seldb),"%d",dictid);
        buf = sdscatprintf(buf,"*2\r\n$6\r\nSELECT\r\n$%lu\r\n%s\r\n",
            (unsigned long)strlen(seldb),seldb);

        server.aof_selected_db = dictid;
    }

    // EXPIRE, PEXPIRE and EXPIREAT commands
    if (cmd->proc == expireCommand || cmd->proc == pexpireCommand ||
        cmd->proc == expireatCommand) {
        /* Translate EXPIRE/PEXPIRE/EXPIREAT into PEXPIREAT */
        buf = catAppendOnlyExpireAtCommand(buf,cmd,argv[1],argv[2]);

    // SETEX and PSETEX commands
    } else if (cmd->proc == setexCommand || cmd->proc == psetexCommand) {
        /* Translate SETEX/PSETEX to SET and PEXPIREAT */
        // SET
        tmpargv[0] = createStringObject("SET",3);
        tmpargv[1] = argv[1];
        tmpargv[2] = argv[3];
        buf = catAppendOnlyGenericCommand(buf,3,tmpargv);

        // PEXPIREAT
        decrRefCount(tmpargv[0]);
        buf = catAppendOnlyExpireAtCommand(buf,cmd,argv[1],argv[2]);

    // all other commands
    } else {
        /* All the other commands don't need translation or need the
         * same translation already operated in the command vector
         * for the replication itself. */
        buf = catAppendOnlyGenericCommand(buf,argc,argv);
    }

    /* Append to the AOF buffer. This will be flushed on disk just before
     * re-entering the event loop, so before the client will get a
     * positive reply about the operation performed. */
    if (server.aof_state == REDIS_AOF_ON)
        server.aof_buf = sdscatlen(server.aof_buf,buf,sdslen(buf));

    /* If a background append only file rewriting is in progress we want to
     * accumulate the differences between the child DB and the current one
     * in a buffer, so that when the child process will do its work we
     * can append the differences to the new append only file. */
    if (server.aof_child_pid != -1)
        aofRewriteBufferAppend((unsigned char*)buf,sdslen(buf));

    // free the temporary buffer
    sdsfree(buf);
}

The function does the following:

  1. If the command targets a database other than the one last selected in the AOF, a SELECT command is emitted first and the AOF's current target database is set to dictid.
  2. EXPIRE, PEXPIRE, and EXPIREAT are all translated into PEXPIREAT.
  3. SETEX and PSETEX are translated into SET plus PEXPIREAT.
  4. Any other command is converted into AOF format by the catAppendOnlyGenericCommand function.

The catAppendOnlyGenericCommand function

catAppendOnlyGenericCommand is implemented as follows:

/*
 * Reconstruct the protocol format from the given command and arguments.
 */
sds catAppendOnlyGenericCommand(sds dst, int argc, robj **argv) {
    char buf[32];
    int len, j;
    robj *o;

    // rebuild the argument count, in the form *<count>\r\n
    // e.g. *3\r\n
    buf[0] = '*';
    len = 1+ll2string(buf+1,sizeof(buf)-1,argc);
    buf[len++] = '\r';
    buf[len++] = '\n';
    dst = sdscatlen(dst,buf,len);

    // rebuild the command and its arguments, in the form $<length>\r\n<content>\r\n
    // e.g. $3\r\nSET\r\n$3\r\nKEY\r\n$5\r\nVALUE\r\n
    for (j = 0; j < argc; j++) {
        o = getDecodedObject(argv[j]);

        // append $<length>\r\n
        buf[0] = '$';
        len = 1+ll2string(buf+1,sizeof(buf)-1,sdslen(o->ptr));
        buf[len++] = '\r';
        buf[len++] = '\n';
        dst = sdscatlen(dst,buf,len);

        // append <content>\r\n
        dst = sdscatlen(dst,o->ptr,sdslen(o->ptr));
        dst = sdscatlen(dst,"\r\n",2);

        decrRefCount(o);
    }

    // return the reconstructed protocol content
    return dst;
}

As can be seen, the function does two things:

  1. Emit the argument count, in the form *<count>\r\n
  2. Emit the command and each of its arguments, in the form $<length>\r\n<content>\r\n
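As a sketch, the same encoding can be reproduced with plain C strings in place of sds and robj (the function name cat_command and the buffer handling here are invented for the illustration):

```c
#include <stdio.h>
#include <string.h>

/* Encode argv into the *<count>\r\n$<length>\r\n<content>\r\n... format
 * used by the AOF; returns the number of bytes written, or -1 if the
 * destination buffer is too small. */
int cat_command(char *dst, size_t cap, int argc, const char **argv) {
    int off = snprintf(dst, cap, "*%d\r\n", argc);
    for (int j = 0; j < argc; j++) {
        if ((size_t)off >= cap) return -1; /* buffer too small */
        off += snprintf(dst + off, cap - off, "$%zu\r\n%s\r\n",
                        strlen(argv[j]), argv[j]);
    }
    return off; /* bytes written, excluding the NUL terminator */
}
```

Encoding SET k1 v1 with this helper yields exactly the AOF content shown earlier: *3\r\n$3\r\nSET\r\n$2\r\nk1\r\n$2\r\nv1\r\n.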

Appending to the AOF buffer

After a command has been rendered into AOF format, the data is placed in server.aof_buf (the AOF buffer). If a BGREWRITEAOF is in progress, the command must additionally be appended to the rewrite buffer, which records the difference between the AOF file being rewritten and the current state of the database.

void feedAppendOnlyFile(struct redisCommand *cmd, int dictid, robj **argv, int argc) {
    ....
    ....
    if (server.aof_state == REDIS_AOF_ON)
        server.aof_buf = sdscatlen(server.aof_buf,buf,sdslen(buf));

    /* If a background append only file rewriting is in progress we want to
     * accumulate the differences between the child DB and the current one
     * in a buffer, so that when the child process will do its work we
     * can append the differences to the new append only file. */
    if (server.aof_child_pid != -1)
        aofRewriteBufferAppend((unsigned char*)buf,sdslen(buf));

    // free the temporary buffer
    sdsfree(buf);
}

AOF rewrite

As the previous sections show, with AOF enabled the AOF file keeps growing as time passes, so Redis provides an AOF rewrite feature. There are three situations that trigger an AOF rewrite:

  1. When the user enables AOF at runtime with "config set appendonly yes", it is invoked once.
  2. When the user issues the "bgrewriteaof" command, it is invoked once, provided no AOF or RDB child process is currently persisting data.
  3. When the user has configured the auto-aof-rewrite-percentage and auto-aof-rewrite-min-size directives, the AOF file has grown beyond min-size, and its growth rate exceeds the percentage, an AOF rewrite is triggered automatically.

If one of these commands arrives while a child process is already doing this work, Redis sets the server.aof_rewrite_scheduled flag; the serverCron periodic task later detects this situation and calls rewriteAppendOnlyFileBackground() again.
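Condition 3 can be sketched roughly as follows (a simplified illustration; the function and parameter names are made up and this is not Redis's actual check):

```c
/* Return 1 if an automatic AOF rewrite should be triggered:
 * the file must exceed the configured minimum size, and its growth
 * relative to the size after the last rewrite (base_size) must be at
 * least growth_pct percent. */
int aof_rewrite_needed(long long current_size, long long base_size,
                       long long min_size, int growth_pct) {
    if (current_size < min_size) return 0;
    long long base = base_size ? base_size : 1; /* avoid division by zero */
    long long growth = (current_size * 100 / base) - 100;
    return growth >= growth_pct;
}
```

For instance, with the common settings auto-aof-rewrite-percentage 100 and auto-aof-rewrite-min-size 64mb, a file that has doubled since the last rewrite and exceeds 64 MB qualifies.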


Enabling AOF rewrite by command (appendonly yes)

When appendonly yes is issued, the startAppendOnly function (in aof.c) is called. If the option was already enabled in the configuration file, the AOF machinery is of course active from startup; in that state this command must not be sent, otherwise Redis will crash outright (note the redisAssert below). startAppendOnly is implemented as follows:

/* Called when the user switches from "appendonly no" to "appendonly yes"
 * at runtime using the CONFIG command. */
int startAppendOnly(void) {

    // set the starting time as the last AOF fsync time
    server.aof_last_fsync = server.unixtime;

    // open the AOF file
    server.aof_fd = open(server.aof_filename,O_WRONLY|O_APPEND|O_CREAT,0644);

    redisAssert(server.aof_state == REDIS_AOF_OFF);

    // opening the file failed
    if (server.aof_fd == -1) {
        redisLog(REDIS_WARNING,"Redis needs to enable the AOF but can't open the append only file: %s",strerror(errno));
        return REDIS_ERR;
    }

    if (rewriteAppendOnlyFileBackground() == REDIS_ERR) {
        // the background AOF rewrite failed, close the AOF file
        close(server.aof_fd);
        redisLog(REDIS_WARNING,"Redis needs to enable the AOF but can't trigger a background AOF rewrite operation. Check the above logs for more info about the error.");
        return REDIS_ERR;
    }

    /* We correctly switched on AOF, now wait for the rewrite to be complete
     * in order to append data on disk. */
    server.aof_state = REDIS_AOF_WAIT_REWRITE;

    return REDIS_OK;
}

The actual snapshotting happens in rewriteAppendOnlyFileBackground. Consider: while Redis is running, how can the data at a given instant be captured completely and consistently?

  1. The application makes its own snapshot, e.g. by copying the data out. The drawback is locking: writes cannot be served during the copy, so the server logically "stalls".
  2. To avoid that stall, the application code can implement copy-on-write (COW), which is more efficient since unmodified data causes no pause.
  3. Redirect-on-write: new writes go somewhere else and are merged back later. This works too, and in some respects Redis's AOF borrows from this idea.
  4. Split-Mirror: more involved, requiring hardware and software support; generally used in storage systems.

Redis adopts COW: by forking a process it obtains an identical snapshot of the current process. Let's look at the implementation of rewriteAppendOnlyFileBackground:

The child process

After the fork, the child closes the listening sockets and immediately calls rewriteAppendOnlyFile to write the data into the temporary file "temp-rewriteaof-bg-%d.aof".

int rewriteAppendOnlyFileBackground(void) {
    pid_t childpid;
    long long start;

    // an AOF rewrite child is already running
    if (server.aof_child_pid != -1) return REDIS_ERR;

    // record the time before the fork, to measure how long fork takes
    start = ustime();

    if ((childpid = fork()) == 0) {
        char tmpfile[256];

        /* Child */

        // close the listening socket fds
        closeListeningSockets(0);

        // set a process title so the child is easy to identify
        redisSetProcTitle("redis-aof-rewrite");

        // create the temporary file and perform the AOF rewrite
        snprintf(tmpfile,256,"temp-rewriteaof-bg-%d.aof", (int) getpid());
        if (rewriteAppendOnlyFile(tmpfile) == REDIS_OK) {
            size_t private_dirty = zmalloc_get_private_dirty();

            if (private_dirty) {
                redisLog(REDIS_NOTICE,
                    "AOF rewrite: %zu MB of memory used by copy-on-write",
                    private_dirty/(1024*1024));
            }
            // signal success via the exit code
            exitFromChild(0);
        } else {
            // signal failure via the exit code
            exitFromChild(1);
        }
    }
    .....
    .....
    ..... 
}

The parent process

The parent's work is simpler. It clears the server.aof_rewrite_scheduled flag so that the next serverCron run does not start another AOF rewrite, records the child's pid in server.aof_child_pid, and calls updateDictResizePolicy. That function takes into account that a background snapshot process is currently writing data and therefore refrains from resizing the dictionaries; this keeps the COW mechanism intact and avoids triggering massive copy-on-write faults that would allocate a lot of memory.

else {
        /* Parent */
        // record how long the fork took
        server.stat_fork_time = ustime()-start;

        if (childpid == -1) {
            redisLog(REDIS_WARNING,
                "Can't rewrite append only file in background: fork: %s",
                strerror(errno));
            return REDIS_ERR;
        }

        redisLog(REDIS_NOTICE,
            "Background append only file rewriting started by pid %d",childpid);

        // record the AOF rewrite state
        server.aof_rewrite_scheduled = 0;
        server.aof_rewrite_time_start = time(NULL);
        server.aof_child_pid = childpid;

        // disable automatic dict rehashing
        updateDictResizePolicy();

        /* We set appendseldb to -1 in order to force the next call to the
         * feedAppendOnlyFile() to issue a SELECT command, so the differences
         * accumulated by the parent into server.aof_rewrite_buf will start
         * with a SELECT statement and it will be safe to merge. */
        server.aof_selected_db = -1;
        replicationScriptCacheFlush();
        return REDIS_OK;
    }
    return REDIS_OK; /* unreached */
}

The rewriteAppendOnlyFile function

As mentioned above, the child process calls rewriteAppendOnlyFile to rewrite the AOF file. The function first opens a temporary file "temp-rewriteaof-%d.aof" and then loops over every DB (server.dbnum of them), writing out each DB's data in turn; concretely, the in-memory data is converted back into the client protocol format as text and written to the file.

/* Write a sequence of commands able to fully rebuild the dataset into
 * "filename". Used both by REWRITEAOF and BGREWRITEAOF.
 *
 * In order to minimize the number of commands needed in the rewritten
 * log Redis uses variadic commands when possible, such as RPUSH, SADD
 * and ZADD. However at max REDIS_AOF_REWRITE_ITEMS_PER_CMD items per time
 * are inserted using a single command. */
int rewriteAppendOnlyFile(char *filename) {
    // called from rewriteAppendOnlyFileBackground(); writes the dataset into the AOF file
    dictIterator *di = NULL;
    dictEntry *de;
    rio aof;
    FILE *fp;
    char tmpfile[256];
    int j;
    long long now = mstime();

    /* Note that we have to use a different temp name here compared to the
     * one used by rewriteAppendOnlyFileBackground() function. */
    snprintf(tmpfile,256,"temp-rewriteaof-%d.aof", (int) getpid());
    fp = fopen(tmpfile,"w");
    if (!fp) {
        redisLog(REDIS_WARNING, "Opening the temp file for AOF rewrite in rewriteAppendOnlyFile(): %s", strerror(errno));
        return REDIS_ERR;
    }
    // set up the rio file-I/O object
    rioInitWithFile(&aof,fp);
    if (server.aof_rewrite_incremental_fsync) // sets r->io.file.autosync = bytes; fsync every 32 MB
        rioSetAutoSync(&aof,REDIS_AOF_AUTOSYNC_BYTES);
    for (j = 0; j < server.dbnum; j++) { // iterate every db and write its contents to disk
        char selectcmd[] = "*2\r\n$6\r\nSELECT\r\n";
        redisDb *db = server.db+j;
        dict *d = db->dict; // the key dictionary of this db
        if (dictSize(d) == 0) continue;
        di = dictGetSafeIterator(d);
        if (!di) {
            fclose(fp);
            return REDIS_ERR;
        }

        /* SELECT the new DB:
         * write SELECT followed by the current db index, i.e. "SELECT db_id" */
        if (rioWrite(&aof,selectcmd,sizeof(selectcmd)-1) == 0) goto werr;
        if (rioWriteBulkLongLong(&aof,j) == 0) goto werr;

        /* Iterate this DB writing every entry */
        while((de = dictNext(di)) != NULL) { // walk every key in this dict, writing each to the AOF
            sds keystr;
            robj key, *o;
            long long expiretime;

            keystr = dictGetKey(de);
            o = dictGetVal(de);
            initStaticStringObject(key,keystr); // initialize a static string object

            expiretime = getExpire(db,&key); // fetch the key's expire time

            /* Save the key and associated value */
            if (o->type == REDIS_STRING) {
                /* Emit a SET command: set keystr valuestr */
                char cmd[]="*3\r\n$3\r\nSET\r\n";
                if (rioWrite(&aof,cmd,sizeof(cmd)-1) == 0) goto werr;
                /* Key and value */
                if (rioWriteBulkObject(&aof,&key) == 0) goto werr;
                if (rioWriteBulkObject(&aof,o) == 0) goto werr;
            } else if (o->type == REDIS_LIST) {
                if (rewriteListObject(&aof,&key,o) == 0) goto werr;
            } else if (o->type == REDIS_SET) {
                if (rewriteSetObject(&aof,&key,o) == 0) goto werr;
            } else if (o->type == REDIS_ZSET) {
                if (rewriteSortedSetObject(&aof,&key,o) == 0) goto werr;
            } else if (o->type == REDIS_HASH) {
                if (rewriteHashObject(&aof,&key,o) == 0) goto werr;
            } else {
                redisPanic("Unknown object type");
            }
            /* Save the expire time */
            if (expiretime != -1) {
                char cmd[]="*3\r\n$9\r\nPEXPIREAT\r\n";
                /* If this key is already expired skip it */
                if (expiretime < now) continue;
                if (rioWrite(&aof,cmd,sizeof(cmd)-1) == 0) goto werr;
                if (rioWriteBulkObject(&aof,&key) == 0) goto werr;
                if (rioWriteBulkLongLong(&aof,expiretime) == 0) goto werr;
            }
        }
        dictReleaseIterator(di);
    }
    /* Make sure data will not remain on the OS's output buffers:
     * flush and close the new AOF file. */
    if (fflush(fp) == EOF) goto werr;
    if (aof_fsync(fileno(fp)) == -1) goto werr;
    if (fclose(fp) == EOF) goto werr;

    
    if (rename(tmpfile,filename) == -1) {
        redisLog(REDIS_WARNING,"Error moving temp append only file on the final destination: %s", strerror(errno));
        unlink(tmpfile);
        return REDIS_ERR;
    }

    redisLog(REDIS_NOTICE,"SYNC append only file rewrite performed");

    return REDIS_OK;

werr:
    fclose(fp);
    unlink(tmpfile);
    redisLog(REDIS_WARNING,"Write error writing append only file on disk: %s", strerror(errno));
    if (di) dictReleaseIterator(di);
    return REDIS_ERR;
}

Note that the file here is written through the standard library. Why? Buffering: this takes full advantage of stdio's buffering, so not every write turns into a system call. If the user has configured "aof-rewrite-incremental-fsync on", fsync is invoked after fwrite has written a chunk of data: after every 32 MB written (the REDIS_AOF_AUTOSYNC_BYTES macro), an explicit fsync is issued to make sure the data reaches the disk steadily.
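The incremental-fsync idea can be sketched as follows (an illustrative simplification of rio's auto-sync behaviour; the struct and function names are invented):

```c
#include <fcntl.h>
#include <unistd.h>

#define AUTOSYNC_BYTES (32LL*1024*1024) /* 32 MB, as in REDIS_AOF_AUTOSYNC_BYTES */

struct syncable {
    int fd;
    long long since_sync; /* bytes written since the last fsync */
};

/* Write to the fd, and fsync once every AUTOSYNC_BYTES written since
 * the last sync, so dirty pages are flushed gradually rather than in
 * one big burst at the end. */
ssize_t write_autosync(struct syncable *s, const void *buf, size_t len) {
    ssize_t n = write(s->fd, buf, len);
    if (n > 0) {
        s->since_sync += n;
        if (s->since_sync >= AUTOSYNC_BYTES) {
            fsync(s->fd);      /* flush the accumulated data to disk */
            s->since_sync = 0;
        }
    }
    return n;
}
```

Spreading the fsyncs out this way avoids a single huge flush at the end of the rewrite, which could stall the disk for other I/O.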


At the very end of the rewrite, rewriteAppendOnlyFile has written the snapshot process's data to disk, closed the file, and exited. But what about the Redis commands executed while the rewrite was in progress? As described in the AOF-write section above, they are appended to the AOF rewrite buffer at write time. That, however, only places the data in a buffer; it is not yet flushed into the AOF file on disk. Reading the code shows that this happens in the periodic task serverCron. The timer is installed in initServer via aeCreateTimeEvent(server.el, 1, serverCron, NULL, NULL), i.e. it first fires after one millisecond.
serverCron is a long function; the part relevant here is a single if-else branch whose condition is whether a snapshot child process is performing an AOF rewrite.
If a snapshot or RDB child is flushing a snapshot to disk, wait3() is used to check whether it has finished; if it has, the corresponding cleanup work is done:

/* Check if a background saving or AOF rewrite in progress terminated. */
if (server.rdb_child_pid != -1 || server.aof_child_pid != -1) {
    int statloc;
    pid_t pid;

    if ((pid = wait3(&statloc,WNOHANG,NULL)) != 0) {
        int exitcode = WEXITSTATUS(statloc);
        int bysignal = 0;

        if (WIFSIGNALED(statloc)) bysignal = WTERMSIG(statloc);

        if (pid == server.rdb_child_pid) {
//Persist the data to disk. The difference from AOF is that AOF keeps
//appending changes to the file, while RDB only saves the snapshot and
//then notifies the slaves.
            backgroundSaveDoneHandler(exitcode,bysignal);
        } else if (pid == server.aof_child_pid) {
        //The exited pid belongs to the AOF rewrite child, i.e. the process
        //forked in rewriteAppendOnlyFileBackground (triggered for example by
        //"config set appendonly yes"). The periodic task has detected that the
        //AOF child finished writing the snapshot and exited, so the data
        //written in the meantime must now be appended to the file.
            backgroundRewriteDoneHandler(exitcode,bysignal);
        } else {
            redisLog(REDIS_WARNING,
                "Warning, detected child with unmatched pid: %ld",
                (long)pid);
        }
        updateDictResizePolicy();
    }
} else {

Here, when the AOF rewrite is found to have finished, backgroundRewriteDoneHandler is called to write the data accumulated in the AOF rewrite buffer into the new AOF file.

void backgroundRewriteDoneHandler(int exitcode, int bysignal) {
    if (!bysignal && exitcode == 0) {
        int newfd, oldfd;
        char tmpfile[256];
        long long now = ustime();

        redisLog(REDIS_NOTICE,
            "Background AOF rewrite terminated with success");

        /* Flush the differences accumulated by the parent to the
         * rewritten AOF. */
        // open the temporary file that holds the new AOF contents
        snprintf(tmpfile,256,"temp-rewriteaof-bg-%d.aof",
            (int)server.aof_child_pid);
        newfd = open(tmpfile,O_WRONLY|O_APPEND);
        if (newfd == -1) {
            redisLog(REDIS_WARNING,
                "Unable to open the temporary AOF produced by the child: %s", strerror(errno));
            goto cleanup;
        }

        // write the accumulated rewrite buffer into the temporary file;
        // the write calls here block the main process
        if (aofRewriteBufferWrite(newfd) == -1) {
            redisLog(REDIS_WARNING,
                "Error trying to flush the parent diff to the rewritten AOF: %s", strerror(errno));
            close(newfd);
            goto cleanup;
        }

        redisLog(REDIS_NOTICE,
            "Parent diff successfully flushed to the rewritten AOF (%lu bytes)", aofRewriteBufferSize());

        if (server.aof_fd == -1) {
            /* AOF disabled */

             /* Don't care if this fails: oldfd will be -1 and we handle that.
              * One notable case of -1 return is if the old file does
              * not exist. */
             oldfd = open(server.aof_filename,O_RDONLY|O_NONBLOCK);
        } else {
            /* AOF enabled */
            oldfd = -1; /* We'll set this to the current AOF filedes later. */
        }

        /* Rename the temporary file. This will not unlink the target file if
         * it exists, because we reference it with "oldfd". */
        if (rename(tmpfile,server.aof_filename) == -1) {
            redisLog(REDIS_WARNING,
                "Error trying to rename the temporary AOF file: %s", strerror(errno));
            close(newfd);
            if (oldfd != -1) close(oldfd);
            goto cleanup;
        }

        if (server.aof_fd == -1) {
            /* AOF disabled, we don't need to set the AOF file descriptor
             * to this new file, so we can close it. (Disabling AOF blocks
             * anyway, so a blocking close here does not matter.) */
            close(newfd);
        } else {
            /* AOF enabled, replace the old fd with the new one. */
            oldfd = server.aof_fd;
            server.aof_fd = newfd;

            // the rewrite buffer was just appended above, so fsync once now
            // according to the configured policy
            if (server.aof_fsync == AOF_FSYNC_ALWAYS)
                aof_fsync(newfd);
            else if (server.aof_fsync == AOF_FSYNC_EVERYSEC)
                aof_background_fsync(newfd);

            server.aof_selected_db = -1; /* Make sure SELECT is re-issued */

            // update the recorded AOF file size
            aofUpdateCurrentSize();

            // record this size as the base for the next auto-rewrite
            server.aof_rewrite_base_size = server.aof_current_size;

            /* Clear regular AOF buffer since its contents was just written to
             * the new AOF from the background rewrite buffer. */
            sdsfree(server.aof_buf);
            server.aof_buf = sdsempty();
        }

        server.aof_lastbgrewrite_status = REDIS_OK;

        redisLog(REDIS_NOTICE, "Background AOF rewrite finished successfully");

        /* Change state from WAIT_REWRITE to ON if needed
         * (i.e. if this was the first time the AOF file was created). */
        if (server.aof_state == REDIS_AOF_WAIT_REWRITE)
            server.aof_state = REDIS_AOF_ON;

        /* Asynchronously close the overwritten AOF. */
        if (oldfd != -1) bioCreateBackgroundJob(REDIS_BIO_CLOSE_FILE,(void*)(long)oldfd,NULL,NULL);

        redisLog(REDIS_VERBOSE,
            "Background AOF rewrite signal handler took %lldus", ustime()-now);

    // BGREWRITEAOF exited with an error
    } else if (!bysignal && exitcode != 0) {
        server.aof_lastbgrewrite_status = REDIS_ERR;

        redisLog(REDIS_WARNING,
            "Background AOF rewrite terminated with error");

    // terminated by a signal
    } else {
        server.aof_lastbgrewrite_status = REDIS_ERR;

        redisLog(REDIS_WARNING,
            "Background AOF rewrite terminated by signal %d", bysignal);
    }

cleanup:

    // reset the AOF rewrite buffer
    aofRewriteBufferReset();

    // remove the temporary file
    aofRemoveTempFile(server.aof_child_pid);

    // reset the rewrite bookkeeping fields
    server.aof_child_pid = -1;
    server.aof_rewrite_time_last = time(NULL)-server.aof_rewrite_time_start;
    server.aof_rewrite_time_start = -1;

    /* Schedule a new rewrite if we are waiting for it to switch the AOF ON. */
    if (server.aof_state == REDIS_AOF_WAIT_REWRITE)
        server.aof_rewrite_scheduled = 1;
}

Inside this handler, aofRewriteBufferWrite writes the contents of the AOF rewrite buffer into the new AOF file, and an fsync (per the configured policy) flushes the data to disk. Once all the work is done, the original file is replaced by the new one. This completes the AOF rewrite.

/* Write the buffer (possibly composed of multiple blocks) into the specified
 * fd. If a short write or any other error happens -1 is returned,
 * otherwise the number of bytes written is returned. */
ssize_t aofRewriteBufferWrite(int fd) {
    listNode *ln;
    listIter li;
    ssize_t count = 0;

    // iterate over all buffer blocks
    listRewind(server.aof_rewrite_buf_blocks,&li);
    while((ln = listNext(&li))) {
        aofrwblock *block = listNodeValue(ln);
        ssize_t nwritten;

        if (block->used) {

            // write the block's contents to fd
            nwritten = write(fd,block->buf,block->used);
            if (nwritten != block->used) {
                if (nwritten == 0) errno = EIO;
                return -1;
            }

            // accumulate the bytes written
            count += nwritten;
        }
    }

    return count;
}
