
Graph Learning Academic Digest [2021/10/8]

Graph-related (graph learning | graph neural networks | graph optimization, etc.) (4 papers)

[ 1 ] Joint inference of multiple graphs with hidden variables from stationary graph signals
Link: https://arxiv.org/abs/2110.03666

Authors: Samuel Rey, Andrei Buciulea, Madeline Navarro, Santiago Segarra, Antonio G. Marques

Affiliations: Dept. of Signal Theory and Communications, King Juan Carlos University, Madrid, Spain; Dept. of Electrical and Computer Engineering, Rice University, Houston, USA

Note: Paper submitted to ICASSP 2022


Abstract: Learning graphs from sets of nodal observations is a prominent problem formally known as graph topology inference. However, current approaches are limited: they typically focus on inferring a single network, and they assume that observations from all nodes are available. First, many contemporary setups involve multiple related networks, and second, it is often the case that only a subset of nodes is observed while the rest remain hidden. Motivated by these facts, we introduce a joint graph topology inference method that models the influence of the hidden variables. Under the assumptions that the observed signals are stationary on the sought graphs and that the graphs are closely related, the joint estimation of multiple networks allows us to exploit such relationships to improve the quality of the learned graphs. Moreover, we confront the challenging problem of modeling the influence of the hidden nodes so as to minimize their detrimental effect. To obtain an amenable approach, we take advantage of the particular structure of the setup at hand and leverage the similarity between the different graphs, which affects both the observed and the hidden nodes. To test the proposed method, numerical simulations over synthetic and real-world graphs are provided.
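The stationarity assumption at the core of this paper can be illustrated with a toy example: if signals are produced by a polynomial graph filter applied to white noise, their covariance is itself a polynomial of the graph shift operator and therefore commutes with it. The sketch below (toy random graph and hypothetical filter coefficients, not the authors' code) checks this numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
# Toy symmetric adjacency matrix used as the graph shift operator S
A = (rng.random((n, n)) < 0.2).astype(float)
S = np.triu(A, 1)
S = S + S.T

# Stationary signals: white noise passed through a polynomial graph filter
# H = h0*I + h1*S + h2*S^2, so the covariance C = H H^T is also a
# polynomial of S and must commute with S.
H = 0.5 * np.eye(n) + 0.3 * S + 0.1 * (S @ S)
C = H @ H.T

commutator_norm = np.linalg.norm(C @ S - S @ C)  # ~0 for stationary signals
```

In practice the sample covariance only approximately commutes with S, which is exactly what topology-inference formulations like this one exploit as a soft constraint.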

[ 2 ] Training Stable Graph Neural Networks Through Constrained Learning
Link: https://arxiv.org/abs/2110.03576

Authors: Juan Cervino, Luana Ruiz, Alejandro Ribeiro

Affiliations: Department of Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, USA


Abstract: Graph Neural Networks (GNNs) rely on graph convolutions to learn features from network data. GNNs are stable to different types of perturbations of the underlying graph, a property that they inherit from graph filters. In this paper we leverage the stability property of GNNs as a starting point to seek representations that are stable within a distribution. We propose a novel constrained learning approach that imposes a constraint on the stability condition of the GNN within a perturbation of choice. We showcase our framework on real-world data, corroborating that we are able to obtain more stable representations without compromising the overall accuracy of the predictor.
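The constrained-learning idea — minimizing a task loss subject to an explicit constraint via a primal-dual (dual ascent) scheme — can be sketched on a scalar toy problem. The loss f, constraint g, and step sizes below are hypothetical stand-ins for the paper's GNN loss and stability constraint:

```python
# Toy dual-ascent sketch of constrained learning:
#   minimize f(w)  subject to  g(w) <= 0
# f plays the role of the task loss; g plays the role of a stability constraint.
f = lambda w: (w - 3.0) ** 2       # task loss (unconstrained optimum at w = 3)
g = lambda w: w ** 2 - 4.0         # constraint: |w| <= 2
df = lambda w: 2.0 * (w - 3.0)     # gradient of f
dg = lambda w: 2.0 * w             # gradient of g

w, lam = 0.0, 0.0
for _ in range(2000):
    w -= 0.01 * (df(w) + lam * dg(w))    # primal descent on the Lagrangian
    lam = max(0.0, lam + 0.01 * g(w))    # dual ascent, projected to lam >= 0

# The dual variable grows until the constraint is active, pulling the
# solution from the unconstrained optimum w = 3 to the boundary w = 2.
```

The same loop structure carries over when w is a GNN parameter vector and g measures the output change under a graph perturbation.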

[ 3 ] Distributed Optimization of Graph Convolutional Network using Subgraph Variance
Link: https://arxiv.org/abs/2110.02987

Authors: Taige Zhao, Xiangyu Song, Jianxin Li, Wei Luo, Imran Razzak

Affiliations: School of Information Technology, Deakin University


Abstract: In recent years, Graph Convolutional Networks (GCNs) have achieved great success in learning from graph-structured data. As the numbers of graph nodes and edges keep growing, single-processor GCN training can no longer meet the demands on time and memory, which has spurred a boom in research on distributed GCN training frameworks. However, existing distributed GCN training frameworks incur enormous communication costs between processors, since multitudes of dependent nodes and edges must be collected and transmitted from other processors for GCN training. To address this issue, we propose a Graph Augmentation based Distributed GCN framework (GAD). In particular, GAD has two main components: GAD-Partition and GAD-Optimizer. We first propose a graph augmentation-based partition (GAD-Partition) that divides the original graph into augmented subgraphs to reduce communication, by selecting and storing as few significant nodes from other processors as possible while guaranteeing the accuracy of training. In addition, we design a subgraph variance-based importance calculation formula and propose a novel weighted global consensus method, collectively referred to as the GAD-Optimizer. This optimizer adaptively reduces the importance of subgraphs with large variance, in order to lessen the effect of the extra variance introduced by GAD-Partition on distributed GCN training. Extensive experiments on four large-scale real-world datasets demonstrate that, compared to state-of-the-art methods, our framework significantly reduces communication overhead (by 50%), improves the convergence speed of distributed GCN training (2X), and achieves a slight gain in accuracy (0.45%) based on minimal redundancy.
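The weighted global consensus component can be illustrated with a minimal sketch: per-subgraph model updates are combined with weights that shrink as the subgraph's variance grows. The inverse-variance weighting below is an illustrative choice, not the paper's exact importance formula, and the updates and variances are made-up numbers:

```python
import numpy as np

# Hypothetical model updates from 3 workers, one per augmented subgraph,
# and the variance estimated for each subgraph.
updates = np.array([[1.0, 2.0],
                    [1.2, 1.8],
                    [5.0, -3.0]])        # the third worker is very noisy
variances = np.array([0.1, 0.2, 4.0])

weights = 1.0 / variances
weights = weights / weights.sum()        # normalized inverse-variance weights
consensus = weights @ updates            # weighted global consensus update

# The high-variance third subgraph contributes little, so the consensus
# stays close to the two low-variance updates.
```

This captures the optimizer's core behavior: extra variance introduced by partitioning is down-weighted instead of averaged in uniformly.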

[ 4 ] A Few-shot Learning Graph Multi-Trajectory Evolution Network for Forecasting Multimodal Baby Connectivity Development from a Baseline Timepoint
Link: https://arxiv.org/abs/2110.03535

Authors: Alaa Bessadok, Ahmed Nebli, Mohamed Ali Mahjoub, Gang Li, Weili Lin, Dinggang Shen, Islem Rekik

Affiliations: BASIRA Lab, Istanbul Technical University, Istanbul, Turkey; Higher Institute of Informatics and Communication Technologies, University of Sousse; National Engineering School of Sousse, University of Sousse; LATIS Laboratory


Abstract: Charting the baby connectome evolution trajectory during the first year after birth plays a vital role in understanding the dynamic connectivity development of baby brains. Such analysis requires the acquisition of longitudinal connectomic datasets. However, both neonatal and postnatal scans are rarely acquired due to various difficulties. A small body of work has focused on predicting baby brain evolution trajectories from a neonatal brain connectome derived from a single modality. Although promising, large training datasets are essential to boost model learning and to generalize to multi-trajectory prediction from different modalities (i.e., functional and morphological connectomes). Here, we unprecedentedly explore the question: can we design a few-shot learning-based framework for predicting brain graph trajectories across different modalities? To this aim, we propose a Graph Multi-Trajectory Evolution Network (GmTE-Net), which adopts a teacher-student paradigm where the teacher network learns on pure neonatal brain graphs and the student network learns on simulated brain graphs given a set of different timepoints. To the best of our knowledge, this is the first teacher-student architecture tailored for brain graph multi-trajectory growth prediction that is based on few-shot learning and generalized to graph neural networks (GNNs). To boost the performance of the student network, we introduce a local topology-aware distillation loss that forces the predicted graph topology of the student network to be consistent with the teacher network. Experimental results demonstrate substantial performance gains over benchmark methods. Hence, our GmTE-Net can be leveraged to predict atypical brain connectivity trajectory evolution across various modalities. Our code is available at https://github.com/basiralab/GmTE-Net.
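A topology-aware distillation loss can be sketched by comparing a simple local topological descriptor — here the node strength (weighted degree) — between the student's and teacher's predicted graphs. The function below is a minimal hypothetical stand-in for the paper's loss (name and descriptor chosen for illustration):

```python
import numpy as np

def topology_distillation_loss(a_student, a_teacher):
    """Mean absolute difference between the node-strength (weighted degree)
    vectors of two predicted brain graphs, given as weighted adjacency
    matrices. A sketch of a local topology-aware distillation term."""
    return np.abs(a_student.sum(axis=1) - a_teacher.sum(axis=1)).mean()

# Tiny 2-node example: the student underestimates the edge weight,
# so each node's strength is off by 0.3.
teacher = np.array([[0.0, 0.8],
                    [0.8, 0.0]])
student = np.array([[0.0, 0.5],
                    [0.5, 0.0]])
loss = topology_distillation_loss(student, teacher)
```

In training, a term like this would be added to the student's objective so that its predicted graphs match the teacher's topology, not just its edge values.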

Cultivate the causes; let the results follow. Author: CBlair. Please credit the original link when reposting: https://www.cnblogs.com/BlairGrowing/p/15381282.html