A Detail of TCP_CORK
Day two of the National Day long holiday: a perfect time to study congestion control. I can't yet afford a trip to Africa to watch wildebeest and zebra migrate under the covetous gaze of lions and crocodiles, but right outside my door I can observe something even more spectacular... It has been a while since I wrote anything about TCP, but watching the National Day traffic jams, beer in hand, TCP came to mind. No holiday, no TCP. So let's get into it...
Many people know TCP's Nagle algorithm, but comparatively few know TCP_CORK. In one sentence: TCP_CORK can be seen as a strengthened Nagle. Where Nagle implicitly refrains from sending small packets, TCP_CORK explicitly blocks them, as its name suggests: as long as user space never explicitly pulls the cork, the final leftover chunk of data smaller than one MSS is never supposed to go out!
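For readers who have never used the option, this is the usual cork/uncork pattern in user space (a minimal sketch; the function and its parameters are mine, only the setsockopt() calls are the real API):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <unistd.h>

/* Write a header and a body as separate calls without leaking small
 * segments onto the wire, then pull the cork to flush what is left. */
static void send_corked(int fd, const void *hdr, size_t hdr_len,
			const void *body, size_t body_len)
{
	int on = 1, off = 0;

	setsockopt(fd, IPPROTO_TCP, TCP_CORK, &on, sizeof(on));   /* insert the cork */
	write(fd, hdr, hdr_len);	/* buffered: no sub-MSS segment is sent */
	write(fd, body, body_len);
	setsockopt(fd, IPPROTO_TCP, TCP_CORK, &off, sizeof(off)); /* pull the cork: flush */
}

Forgetting that final setsockopt() is exactly the failure mode the rest of this post is about.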
That, at least, is the designer's intent, but is that really how it behaves? What if the programmer forgets to pull the cork? How is that fault tolerated? (A programming mistake must not cause bizarre behavior at the protocol level; TCP is supposed to be robust.)
Many articles say that the TCP stack waits 200 ms, and if nobody pulls the cork in that window, the leftover data is sent unconditionally even if it is smaller than one MSS. That does make the system more robust, but where does this 200 ms come from? Why 200 ms?
In fact, there is no 200 ms anywhere in Linux's TCP_CORK implementation. The so-called 200 ms is merely the minimum RTO of a TCP connection; the delayed-send timeout of TCP_CORK is exactly one RTO, not 200 ms!
How do we verify this?
I admit I don't like the perf stack-tracing routine; in my view, using perf for something simple is not convenient at all and costs a lot of time. I am not encouraging anyone to reinvent wheels; I simply dislike that ornate rococo style in a purely philosophical sense. I like doing things myself, short and fast! So I chose to write my own jprobe to trace the stack, following the basic tcp_probe pattern.
To verify that TCP_CORK's delayed-send interval is one RTO, I wrote the following packetdrill script:
0 socket(..., SOCK_STREAM, IPPROTO_TCP) = 3
+0 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
+0 bind(3, ..., ...) = 0
+0 listen(3, 1) = 0
+0 < S 0:0(0) win 32792 <mss 1000, sackOK, nop, nop, nop, wscale 7>
+0 > S. 0:0(0) ack 1 <...>
+.1 < . 1:1(0) ack 1 win 257
+0 accept(3, ..., ...) = 4
+0 setsockopt(4, IPPROTO_TCP, TCP_NODELAY, [0], 4) = 0
// set TCP_CORK: insert the cork
+0 setsockopt(4, IPPROTO_TCP, TCP_CORK, [1], 4) = 0
// start sending full-MSS data
+0 write(4, ..., 1000) = 1000
// delay the ACKs, so that the RTO grows
+0.350 < . 1:1(0) ack 1001 win 10000
+0 write(4, ..., 1000) = 1000
+0.350 < . 1:1(0) ack 2001 win 10000
// print the rto and rtt values
+0 %{ print tcpi_rto }%
+0 %{ print tcpi_rtt }%
// the key step: write a small packet of only 10 bytes; held back by the
// cork, its transmission is guaranteed to be delayed
+0 write(4, ..., 10) = 10
+0.40 < . 1:1(0) ack 2011 win 10000
// wait and observe
+2.960 write(4, ..., 10000) = 10000
Run the script, capture the packets, and compare the timestamp of the final 10-byte packet with the one before it: the gap is exactly the RTO that the packetdrill script printed. I am about to leave for Shantou, so no screenshots this time.
With an RTO of about 560 ms, you will see the cork's delayed-send interval far exceed 200 ms. It is not a fixed 200 ms!
The code holds no secrets.
Let's look at TCP's timer identifiers:
#define ICSK_TIME_RETRANS 1 /* Retransmit timer */
#define ICSK_TIME_DACK 2 /* Delayed ack timer */
#define ICSK_TIME_PROBE0 3 /* Zero window probe timer */
#define ICSK_TIME_EARLY_RETRANS 4 /* Early retransmit timer */
#define ICSK_TIME_LOSS_PROBE 5 /* Tail loss probe timer */
So which one is the cork timer? While discussing this with a colleague I had a vague feeling I had run into the question before, and indeed I had; searching my own blog turned up:
"UDP_CORK, TCP_CORK and TCP_NODELAY"
That was back in 2010. Who remembers technical details from that long ago? Fortunately I wrote some of it down at the time...
That article notes that ICSK_TIME_PROBE0 is precisely the timer that later sends data held back by TCP_CORK, and that tcp_write_wakeup is the function doing the actual transmission. To find out this timer's expiry time, I wrote the probe code below:
/* jprobe handler hooked onto sk_reset_timer(); "port" is a module
 * parameter used to filter the connection of interest, and
 * tcp_probe0_when2()/tcp_probe0_base2() are (presumably) local copies
 * of the kernel's probe0 helpers. */
void jsk_reset_timer(struct sock *sk, struct timer_list *timer,
		     unsigned long expires)
{
	struct inet_sock *inet = inet_sk(sk);

	if (ntohs(inet->inet_dport) == port || ntohs(inet->inet_sport) == port) {
		struct inet_connection_sock *icsk = inet_csk(sk);

		if (&icsk->icsk_retransmit_timer == timer) {
			printk("#####:%d %u %lu %lu\n", icsk->icsk_pending,
			       jiffies_to_msecs(tcp_probe0_when2(sk, (unsigned)(120*HZ))),
			       tcp_probe0_base2(sk), icsk->icsk_timeout);
			printk("#####:%u %u %u %d\n", jiffies_to_msecs(TCP_RTO_MIN),
			       TCP_RTO_MIN, TCP_RTO_MAX, HZ);
			if (icsk->icsk_pending == ICSK_TIME_PROBE0) /* i.e. the value 3 */
				dump_stack();
		}
	}
	jprobe_return();
}

static struct jprobe tcp_jprobe = {
	.kp = {
		.symbol_name = "sk_reset_timer",
	},
	.entry = jsk_reset_timer,
};
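For completeness, the rest of the module is the standard jprobe skeleton, sketched here after the tcp_probe-style pattern. The port parameter is my assumption about how the filter above gets its value; also note that jprobes were removed from the kernel in 4.15, so this only builds on older trees:

#include <linux/module.h>
#include <linux/kprobes.h>
#include <net/tcp.h>

static int port = 0;
module_param(port, int, 0644);
MODULE_PARM_DESC(port, "TCP port to match (source or destination)");

static int __init corktrace_init(void)
{
	/* hook jsk_reset_timer() onto every call of sk_reset_timer() */
	return register_jprobe(&tcp_jprobe);
}

static void __exit corktrace_exit(void)
{
	unregister_jprobe(&tcp_jprobe);
}

module_init(corktrace_init);
module_exit(corktrace_exit);
MODULE_LICENSE("GPL");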
The stack trace shows that the timer is armed in __tcp_push_pending_frames:
void __tcp_push_pending_frames(struct sock *sk, unsigned int cur_mss,
			       int nonagle)
{
	/* If we are closed, the bytes will have to remain here.
	 * In time closedown will finish, we empty the write queue and
	 * all will be happy.
	 */
	if (unlikely(sk->sk_state == TCP_CLOSE))
		return;

	// tcp_write_xmit() returns true here, because TCP_CORK has blocked
	// the send; see tcp_nagle_test() -> tcp_nagle_check()
	if (tcp_write_xmit(sk, cur_mss, nonagle, 0,
			   sk_gfp_atomic(sk, GFP_ATOMIC)))
		// ... so the probe timer gets armed
		tcp_check_probe_timer(sk);
}
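For reference, the check that actually holds the sub-MSS segment back is tcp_nagle_check(), reached through tcp_nagle_test(); roughly as it reads in 3.x-era kernels (newer trees have reshuffled it a little):

/* Return true if the segment must be held back: it is smaller than one
 * MSS, and either the cork is in (TCP_NAGLE_CORK) or classic Nagle
 * applies (data already in flight and Minshall's check passes). */
static inline bool tcp_nagle_check(const struct tcp_sock *tp,
				   const struct sk_buff *skb,
				   unsigned int mss_now, int nonagle)
{
	return skb->len < mss_now &&
	       ((nonagle & TCP_NAGLE_CORK) ||
		(!nonagle && tp->packets_out && tcp_minshall_check(tp)));
}

With the cork in, TCP_NAGLE_CORK is set in nonagle, so any segment shorter than one MSS is refused regardless of what is in flight.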
Finally, the logic that arms this probe timer is simple as well:
static inline void tcp_check_probe_timer(struct sock *sk)
{
	const struct tcp_sock *tp = tcp_sk(sk);
	const struct inet_connection_sock *icsk = inet_csk(sk);

	// this condition matches the Nagle/cork semantics exactly
	if (!tp->packets_out && !icsk->icsk_pending)
		inet_csk_reset_xmit_timer(sk, ICSK_TIME_PROBE0,
					  // note: the timeout is one RTO
					  icsk->icsk_rto, TCP_RTO_MAX);
}
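On the expiry side, the shared write timer dispatches on icsk_pending, and the ICSK_TIME_PROBE0 case reaches tcp_write_wakeup() via tcp_probe_timer() and tcp_send_probe0(), which is what finally transmits the corked data. A condensed sketch of the dispatch, from my memory of 3.x-era net/ipv4/tcp_timer.c with error handling omitted:

/* inside tcp_write_timer_handler(), condensed: one timer slot serves
 * both retransmission and zero-window/cork probing */
switch (icsk->icsk_pending) {
case ICSK_TIME_RETRANS:
	tcp_retransmit_timer(sk);
	break;
case ICSK_TIME_PROBE0:
	/* tcp_probe_timer() -> tcp_send_probe0() -> tcp_write_wakeup():
	 * the corked sub-MSS segment leaves here, one RTO later */
	tcp_probe_timer(sk);
	break;
}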
Honestly, reusing the PROBE0 name for the cork's delayed send is a bit of a misnomer, but we got used to this style long ago...
To finish, let's look at an even finer detail: the minimum RTO.
We know the RTO is computed from the RTT, and the RTT here is really an exponentially weighted moving average of the sampled live RTTs, so historical samples keep a share of the smoothed value. Predictably, then, even on an ultra-fast host-to-host path the initial RTT is not the measured value but a preset guess; for the smoothed RTT to converge to the real value, the moving average has to move for a while, which means sending more data (see the script below, and the small simulation after it):
0 socket(..., SOCK_STREAM, IPPROTO_TCP) = 3
+0 setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
+0 bind(3, ..., ...) = 0
+0 listen(3, 1) = 0
+0 < S 0:0(0) win 32792 <mss 1000, sackOK, nop, nop, nop, wscale 7>
+0 > S. 0:0(0) ack 1 <...>
+.1 < . 1:1(0) ack 1 win 257
+0 accept(3, ..., ...) = 4
+0 setsockopt(4, IPPROTO_TCP, TCP_NODELAY, [0], 4) = 0
+0 setsockopt(4, IPPROTO_TCP, TCP_CORK, [1], 4) = 0
+0 write(4, ..., 1000) = 1000
+0.0 < . 1:1(0) ack 1001 win 10000
+0 write(4, ..., 1000) = 1000
+0.0 < . 1:1(0) ack 2001 win 10000
+0 write(4, ..., 1000) = 1000
+0.0 < . 1:1(0) ack 3001 win 10000
+0 write(4, ..., 1000) = 1000
+0.0 < . 1:1(0) ack 4001 win 10000
+0 write(4, ..., 1000) = 1000
+0.0 < . 1:1(0) ack 5001 win 10000
+0 write(4, ..., 1000) = 1000
+0.0 < . 1:1(0) ack 6001 win 10000
+0 write(4, ..., 1000) = 1000
+0.0 < . 1:1(0) ack 7001 win 10000
+0 write(4, ..., 1000) = 1000
+0.0 < . 1:1(0) ack 8001 win 10000
+0 write(4, ..., 1000) = 1000
+0.0 < . 1:1(0) ack 9001 win 10000
+0 write(4, ..., 1000) = 1000
+0.0 < . 1:1(0) ack 10001 win 10000
+0 write(4, ..., 1000) = 1000
+0.0 < . 1:1(0) ack 11001 win 10000
+0 write(4, ..., 1000) = 1000
+0.0 < . 1:1(0) ack 12001 win 10000
+0 write(4, ..., 1000) = 1000
+0.0 < . 1:1(0) ack 13001 win 10000
+0 write(4, ..., 1000) = 1000
+0.0 < . 1:1(0) ack 14001 win 10000
+0 write(4, ..., 1000) = 1000
+0.0 < . 1:1(0) ack 15001 win 10000
+0 write(4, ..., 1000) = 1000
+0.0 < . 1:1(0) ack 16001 win 10000
+0 write(4, ..., 1000) = 1000
+0.0 < . 1:1(0) ack 17001 win 10000
+0 write(4, ..., 1000) = 1000
+0.0 < . 1:1(0) ack 18001 win 10000
+0 write(4, ..., 1000) = 1000
+0.0 < . 1:1(0) ack 19001 win 10000
+0 write(4, ..., 1000) = 1000
+0.0 < . 1:1(0) ack 20001 win 10000
+0 write(4, ..., 1000) = 1000
+0.0 < . 1:1(0) ack 21001 win 10000
+0 write(4, ..., 1000) = 1000
+0.0 < . 1:1(0) ack 22001 win 10000
+0 write(4, ..., 1000) = 1000
+0.0 < . 1:1(0) ack 23001 win 10000
+0 write(4, ..., 1000) = 1000
+0.0 < . 1:1(0) ack 24001 win 10000
+0 write(4, ..., 1000) = 1000
+0.0 < . 1:1(0) ack 25001 win 10000
+0 write(4, ..., 1000) = 1000
+0.0 < . 1:1(0) ack 26001 win 10000
+0 write(4, ..., 1000) = 1000
+0.0 < . 1:1(0) ack 27001 win 10000
+0 write(4, ..., 1000) = 1000
+0.0 < . 1:1(0) ack 28001 win 10000
+0 write(4, ..., 1000) = 1000
+0.0 < . 1:1(0) ack 29001 win 10000
+0 write(4, ..., 1000) = 1000
+0.0 < . 1:1(0) ack 30001 win 10000
+0 write(4, ..., 1000) = 1000
+0.0 < . 1:1(0) ack 31001 win 10000
+0 write(4, ..., 1000) = 1000
+0.0 < . 1:1(0) ack 32001 win 10000
+0 write(4, ..., 1000) = 1000
+0.0 < . 1:1(0) ack 33001 win 10000
+0 write(4, ..., 1000) = 1000
+0.0 < . 1:1(0) ack 34001 win 10000
+0 write(4, ..., 1000) = 1000
+0.0 < . 1:1(0) ack 35001 win 10000
+0 %{ print tcpi_rto }%
+0 %{ print tcpi_rtt }%
// All the data above is sent solely to let the RTT settle! During the
// handshake the RTT is a guess; the more data is transferred, the more
// accurate the RTT, and hence the more reasonable the RTO.
// The key step: send a small packet of only 10 bytes
+0 write(4, ..., 10) = 10
+0.40 < . 1:1(0) ack 35011 win 10000
+2.960 write(4, ..., 10000) = 10000
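To get a feel for why so many round trips are needed before the smoothed values settle, here is a small standalone simulation of the RFC 6298 update rules (a floating-point sketch of mine; the kernel's tcp_rtt_estimator() implements the same recurrences in fixed point, and the initial guess below just stands in for a pessimistic preset, since Linux seeds its estimator a bit differently):

#include <stdio.h>

/* RFC 6298: rttvar <- 3/4 rttvar + 1/4 |srtt - rtt|,
 * srtt <- 7/8 srtt + 1/8 rtt, rto = srtt + 4 * rttvar,
 * clamped below by the 200 ms minimum discussed next. */
int main(void)
{
	double rtt = 0.4;		/* steady measured RTT in ms (fast LAN) */
	double srtt = 100.0;		/* deliberately pessimistic initial guess */
	double rttvar = srtt / 2.0;

	for (int i = 1; i <= 40; i++) {
		rttvar = 0.75 * rttvar + 0.25 * (srtt > rtt ? srtt - rtt : rtt - srtt);
		srtt   = 0.875 * srtt + 0.125 * rtt;
		double rto = srtt + 4.0 * rttvar;
		if (rto < 200.0)
			rto = 200.0;	/* the TCP_RTO_MIN floor */
		if (i % 8 == 0)
			printf("sample %2d: srtt=%6.2f ms  rto=%6.2f ms\n", i, srtt, rto);
	}
	return 0;
}

Starting 100 ms off, the smoothed RTT is still several milliseconds away from the true 0.4 ms after two dozen samples; that is exactly why the script above pumps thirty-odd full-MSS writes through the connection first.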
At this point, still using the probe code shown earlier (the same jsk_reset_timer() handler), we print the minimum RTO.
What comes out is precisely the smallest value above 200 ms, and the 200 ms is defined by the following macro:
#define TCP_RTO_MIN ((unsigned)(HZ/5))
which depends on HZ. Note that TCP_RTO_MIN here is not in milliseconds but in clock ticks; converting it to milliseconds takes the following:
unsigned int jiffies_to_msecs(const unsigned long j)
{
#if HZ <= MSEC_PER_SEC && !(MSEC_PER_SEC % HZ)
	return (MSEC_PER_SEC / HZ) * j;
#elif HZ > MSEC_PER_SEC && !(HZ % MSEC_PER_SEC)
	return (j + (HZ / MSEC_PER_SEC) - 1) / (HZ / MSEC_PER_SEC);
#else
# if BITS_PER_LONG == 32
	return (HZ_TO_MSEC_MUL32 * j) >> HZ_TO_MSEC_SHR32;
# else
	return (j * HZ_TO_MSEC_NUM) / HZ_TO_MSEC_DEN;
# endif
#endif
}
Messy, but substitute TCP_RTO_MIN in and the observed value is (200 + one tick in milliseconds) ms. With HZ=250, HZ/5 is 50 jiffies and jiffies_to_msecs(50) = (1000/250) × 50 = 200 ms; since the timer can only fire on a tick boundary, what you measure is 200 + 4 = 204 ms. Why 200 ms rather than 400 ms or 20 ms? I suspect it is another empirical constant of its era, derived from the network statistics of the day, something like the MSL. There is an interesting comment in the source:
/* Something is really bad, we could not queue an additional packet,
 * because qdisc is full or receiver sent a 0 window.
 * We do not want to add fuel to the fire, or abort too early,
 * so make sure the timer we arm now is at least 200ms in the future,
 * regardless of current icsk_rto value (as it could be ~2ms)
 */
static inline unsigned long tcp_probe0_base(const struct sock *sk)
{
	return max_t(unsigned long, inet_csk(sk)->icsk_rto, TCP_RTO_MIN);
}
Why delay at least 200 ms? Because you must not add fuel to the fire.
Very evocative, very profound: do not add fuel to the fire. I hope the drivers out on the roads this National Day holiday can appreciate that comment; every holiday is a prime window for my congestion-control studies...
Interesting. Very interesting.