GraphSAGE Code Walkthrough (Part 3) - aggregators.py
1. class MeanAggregator(Layer):
This class implements the mean-aggregator form of the GraphSAGE layer update:

h^k_{N(v)} ← MEAN({ h^{k-1}_u : u ∈ N(v) })
h^k_v ← σ( W_self · h^{k-1}_v + W_neigh · h^k_{N(v)} )

(when concat=True the two terms are concatenated instead of summed).
1. __init__()
__init__() fetches and initializes the member variables dropout, bias (default False), act (default ReLU), concat (default False), input_dim, output_dim, and name (used for the variable scope).
The weight matrix vars['self_weights'] for the node v itself and the weight matrix vars['neigh_weights'] for the mean of its neighbors u are initialized with glorot().
vars['bias'] is initialized as a zero vector (see zeros(shape) in inits.py).
If logging is True, _log_vars() from class Layer in layers.py is called to generate a histogram summary for each variable in vars; a sketch of that helper follows.
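For reference, _log_vars() simply writes one TensorBoard histogram summary per variable the layer created; a minimal sketch of the helper (the exact body in layers.py may differ slightly):

```python
# Inside class Layer (layers.py): one histogram summary per variable.
def _log_vars(self):
    for var in self.vars:
        tf.summary.histogram(self.name + '/vars/' + var, self.vars[var])
```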
glorot()
glorot() is defined in inits.py (from .inits import glorot) and is used for weight initialization.
It is a uniform initializer, also known as Xavier uniform initialization: parameters are drawn from a uniform distribution over [-limit, limit], where limit = sqrt(6 / (fan_in + fan_out)); fan_in is the number of input units of the weight tensor and fan_out is the number of output units. The function returns a Variable of shape [fan_in, fan_out].
```python
def glorot(shape, name=None):
    """Glorot & Bengio (AISTATS 2010) init."""
    init_range = np.sqrt(6.0 / (shape[0] + shape[1]))
    initial = tf.random_uniform(shape, minval=-init_range, maxval=init_range,
                                dtype=tf.float32)
    return tf.Variable(initial, name=name)
```
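As a quick sanity check of the limit formula (the shape here is just an illustration):

```python
import numpy as np

# For a [128, 64] weight matrix, fan_in + fan_out = 192,
# so weights are drawn uniformly from [-0.1768, 0.1768].
limit = np.sqrt(6.0 / (128 + 64))
print(limit)  # ~0.1768
```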
2. _call(inputs)
_call(inputs) in class MeanAggregator(Layer) overrides the _call(inputs) method of the parent class Layer(object).
It implements the iterative update equation shown above.
In class Layer(object), defined in layers.py, the special method __call__(inputs) executes outputs = self._call(inputs); this call to _call(inputs) dispatches to the subclass implementation, i.e. to _call(inputs) of MeanAggregator(Layer), as sketched below.
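A simplified, assumed sketch of that dispatch (the repo's class Layer also handles logging and naming details):

```python
import tensorflow as tf

class Layer(object):
    """Simplified sketch of the base layer in layers.py."""
    def __init__(self, **kwargs):
        self.name = kwargs.get('name', self.__class__.__name__.lower())
        self.logging = kwargs.get('logging', False)
        self.vars = {}

    def _call(self, inputs):
        # Overridden by subclasses such as MeanAggregator.
        return inputs

    def __call__(self, inputs):
        with tf.name_scope(self.name):
            # This is the line that dispatches to the subclass _call().
            return self._call(inputs)
```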
tf.nn.dropout(x, keep_prob, noise_shape=None, seed=None, name=None)
With probability keep_prob, outputs the input element scaled up by 1 / keep_prob, otherwise outputs 0. The scaling is so that the expected sum is unchanged.
Note: the non-zero outputs are scaled to 1/keep_prob times their original values so that the expected sum is unchanged, as demonstrated below.
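A quick demonstration of that scaling under TF1 semantics (keep_prob chosen arbitrarily):

```python
import tensorflow as tf

x = tf.ones([4])
y = tf.nn.dropout(x, keep_prob=0.5)  # kept entries are scaled to 1 / 0.5 = 2.0

with tf.Session() as sess:
    print(sess.run(y))  # e.g. [2. 0. 2. 2.]; the expected sum stays 4
```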
tf.add_n(inputs, name=None)
Adds all input tensors element-wise.

Args:
  inputs: A list of Tensor or IndexedSlices objects, each with same shape and type.
  name: A name for the operation (optional).
Returns:
  A Tensor of same shape and type as the elements of inputs.
Raises:
  ValueError: If inputs don't all have same shape and dtype or the shape cannot be inferred.
output = tf.concat([from_self, from_neighs], axis=1)
Note that after the concat the output dimension is twice what it was before; see the shape check below.
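A small shape check contrasting the two branches (dimensions chosen for illustration):

```python
import tensorflow as tf

from_self = tf.zeros([5, 64])    # [num_nodes, output_dim]
from_neighs = tf.zeros([5, 64])  # [num_nodes, output_dim]

added = tf.add_n([from_self, from_neighs])                  # shape (5, 64)
concatenated = tf.concat([from_self, from_neighs], axis=1)  # shape (5, 128): doubled
print(added.shape, concatenated.shape)
```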
3. class MeanAggregator(Layer) code
```python
class MeanAggregator(Layer):
    """
    Aggregates via mean followed by matmul and non-linearity.
    """

    def __init__(self, input_dim, output_dim, neigh_input_dim=None,
                 dropout=0., bias=False, act=tf.nn.relu,
                 name=None, concat=False, **kwargs):
        super(MeanAggregator, self).__init__(**kwargs)

        self.dropout = dropout
        self.bias = bias
        self.act = act
        self.concat = concat

        if neigh_input_dim is None:
            neigh_input_dim = input_dim

        if name is not None:
            name = '/' + name
        else:
            name = ''

        with tf.variable_scope(self.name + name + '_vars'):
            self.vars['neigh_weights'] = glorot([neigh_input_dim, output_dim],
                                                name='neigh_weights')
            self.vars['self_weights'] = glorot([input_dim, output_dim],
                                               name='self_weights')
            if self.bias:
                # Use the local output_dim: self.output_dim is only assigned below.
                self.vars['bias'] = zeros([output_dim], name='bias')

        if self.logging:
            self._log_vars()

        self.input_dim = input_dim
        self.output_dim = output_dim

    def _call(self, inputs):
        self_vecs, neigh_vecs = inputs

        neigh_vecs = tf.nn.dropout(neigh_vecs, 1 - self.dropout)
        self_vecs = tf.nn.dropout(self_vecs, 1 - self.dropout)
        neigh_means = tf.reduce_mean(neigh_vecs, axis=1)

        # [nodes] x [out_dim]
        from_neighs = tf.matmul(neigh_means, self.vars['neigh_weights'])

        from_self = tf.matmul(self_vecs, self.vars['self_weights'])

        if not self.concat:
            output = tf.add_n([from_self, from_neighs])
        else:
            output = tf.concat([from_self, from_neighs], axis=1)

        # bias
        if self.bias:
            output += self.vars['bias']

        return self.act(output)
```
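Given the listing above, a minimal usage sketch, assuming TensorFlow 1.x and the GraphSAGE package on the path (feature size and neighbor count are illustrative):

```python
import tensorflow as tf
from graphsage.aggregators import MeanAggregator

# Each node has a 128-d feature vector; 10 neighbors sampled per node.
self_vecs = tf.placeholder(tf.float32, [None, 128])
neigh_vecs = tf.placeholder(tf.float32, [None, 10, 128])

agg = MeanAggregator(input_dim=128, output_dim=64, concat=True)
out = agg((self_vecs, neigh_vecs))  # __call__ -> _call(); shape [None, 128]
                                    # because concat doubles output_dim
```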
2. class GCNAggregator(Layer)
Here __init__() is essentially the same as in MeanAggregator; the differences are a single shared weight matrix (see the sketch below) and a slightly different _call() implementation.
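Roughly, the variable setup in GCNAggregator looks like this (a sketch based on the repo, not an exact quote):

```python
# Inside GCNAggregator.__init__(): one weight matrix is applied to the
# mean of the node's own vector and its neighbors' vectors, instead of
# separate self/neigh weights.
with tf.variable_scope(self.name + name + '_vars'):
    self.vars['weights'] = glorot([neigh_input_dim, output_dim],
                                  name='neigh_weights')
    if self.bias:
        self.vars['bias'] = zeros([output_dim], name='bias')
```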
```python
def _call(self, inputs):
    self_vecs, neigh_vecs = inputs

    neigh_vecs = tf.nn.dropout(neigh_vecs, 1 - self.dropout)
    self_vecs = tf.nn.dropout(self_vecs, 1 - self.dropout)
    # Average the node's own vector together with its neighbors' vectors.
    means = tf.reduce_mean(tf.concat([neigh_vecs,
        tf.expand_dims(self_vecs, axis=1)], axis=1), axis=1)

    # [nodes] x [out_dim]
    output = tf.matmul(means, self.vars['weights'])

    # bias
    if self.bias:
        output += self.vars['bias']

    return self.act(output)
```
When computing means:
1. self_vecs is first given an extra dimension via tf.expand_dims(self_vecs, axis=1) (in the 2-D toy example below, this turns the row vector into a column vector);
2. the expanded self_vecs then has the same number of rows as neigh_vecs, so the two are concatenated along axis=1, which amounts to appending self_vecs as one extra column of the original neigh_vecs matrix;
3. finally, the mean of each row of the resulting matrix is taken, yielding means.
means is then multiplied by the weight matrix vars['weights'], vars['bias'] is added if bias is enabled, and the result is passed through the activation function (ReLU).
A simple example follows (the multiplication by W is omitted):
```python
import tensorflow as tf

neigh_vecs = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
self_vecs = [2, 3, 4]

means = tf.reduce_mean(tf.concat([neigh_vecs,
    tf.expand_dims(self_vecs, axis=1)], axis=1), axis=1)

print(tf.shape(self_vecs))

print(tf.expand_dims(self_vecs, axis=0))
# Tensor("ExpandDims_1:0", shape=(1, 3), dtype=int32)

print(tf.expand_dims(self_vecs, axis=1))
# Tensor("ExpandDims_2:0", shape=(3, 1), dtype=int32)

sess = tf.Session()
print(sess.run(tf.expand_dims(self_vecs, axis=1)))
# [[2]
#  [3]
#  [4]]

print(sess.run(tf.concat([neigh_vecs,
    tf.expand_dims(self_vecs, axis=1)], axis=1)))
# [[1 2 3 2]
#  [4 5 6 3]
#  [7 8 9 4]]

print(means)
# Tensor("Mean:0", shape=(3,), dtype=int32)

print(sess.run(tf.reduce_mean(tf.concat([neigh_vecs,
    tf.expand_dims(self_vecs, axis=1)], axis=1), axis=1)))
# [2 4 7]

# Integer means (int32 tensors use floor division):
# [1 2 3 2] -> 8  // 4 = 2
# [4 5 6 3] -> 18 // 4 = 4
# [7 8 9 4] -> 28 // 4 = 7

bias = [1]
output = means + bias
print(sess.run(output))
# [3 5 8]
# [2 + 1, 4 + 1, 7 + 1] = [3, 5, 8]
```