
Graph Learning Academic Digest [2021/10/13]

Graph-related (graph learning | graph neural networks | graph optimization, etc.) (4 papers)

[ 1 ] GraPE: fast and scalable Graph Processing and Embedding
Link: https://arxiv.org/abs/2110.06196

Authors: Luca Cappelletti, Tommaso Fontana, Elena Casiraghi, Vida Ravanmehr, Tiffany J. Callahan, Marcin P. Joachimiak, Christopher J. Mungall, Peter N. Robinson, Justin Reese, Giorgio Valentini


Affiliations: AnacletoLab, Dipartimento di Informatica, Universita degli Studi di Milano, Italy; The Jackson Laboratory for Genomic Medicine, Farmington, CT, USA; Lawrence Berkeley National Laboratory, USA; European Laboratory for Learning and Intelligent Systems (ELLIS)

Abstract: Graph Representation Learning methods have enabled a wide range of learning problems to be addressed for data that can be represented in graph form. Nevertheless, several real-world problems in economy, biology, medicine and other fields raise relevant scaling problems with existing methods and their software implementations, because real-world graphs are characterized by millions of nodes and billions of edges. We present GraPE, a software resource for graph processing and random-walk-based embedding that can scale to large and high-degree graphs and significantly speed up computation. GraPE comprises specialized data structures, algorithms, and a fast parallel implementation that displays several orders of magnitude of improvement in empirical space and time complexity compared to state-of-the-art software resources, with a corresponding boost in the performance of machine learning methods for edge and node label prediction and for the unsupervised analysis of graphs. GraPE is designed to run on laptop and desktop computers, as well as on high-performance computing clusters.
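The random-walk-based embedding pipeline the abstract refers to can be sketched in miniature. The toy graph, walk parameters, and uniform first-order sampling below are hypothetical illustrations; GraPE's actual data structures and samplers are specialized and far more elaborate:

```python
import random

# Toy graph as an adjacency list (hypothetical data, purely illustrative).
graph = {
    "a": ["b", "c"],
    "b": ["a", "c"],
    "c": ["a", "b", "d"],
    "d": ["c"],
}

def random_walks(graph, walks_per_node=10, walk_length=5, seed=0):
    """Generate uniform first-order random walks starting from every node.

    The resulting walks would typically feed a skip-gram model (node2vec
    style) to produce node embeddings.
    """
    rng = random.Random(seed)
    walks = []
    for start in graph:
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_length:
                neighbors = graph[walk[-1]]
                if not neighbors:
                    break  # dead end: stop this walk early
                walk.append(rng.choice(neighbors))
            walks.append(walk)
    return walks

walks = random_walks(graph)
print(len(walks))  # 4 nodes x 10 walks per node = 40 walks
```

The point of GraPE, per the abstract, is that this sampling step (and the structures backing it) stays fast even on graphs with billions of edges, which a plain dictionary-of-lists like the one above cannot do.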

[ 2 ] ConTIG: Continuous Representation Learning on Temporal Interaction Graphs
Link: https://arxiv.org/abs/2110.06088

Authors: Xu Yan, Xiaoliang Fan, Peizhen Yang, Zonghan Wu, Shirui Pan, Longbiao Chen, Yu Zang, Cheng Wang
Comments: 12 pages; 6 figures
Abstract: Representation learning on temporal interaction graphs (TIG) models complex networks with the dynamic evolution of interactions arising in a broad spectrum of problems. Existing dynamic embedding methods on TIGs update node embeddings only discretely, when an interaction occurs; they fail to capture the continuous dynamic evolution of nodes' embedding trajectories. In this paper, we propose a two-module framework named ConTIG, a continuous representation method that captures the continuous dynamic evolution of node embedding trajectories. With two essential modules, our model exploits three factors in dynamic networks: latest interactions, neighbor features, and inherent characteristics. In the first (update) module, we employ a continuous inference block to learn nodes' state trajectories from time-adjacent interaction patterns between node pairs using ordinary differential equations. In the second (transform) module, we introduce a self-attention mechanism to predict future node embeddings by aggregating historical temporal interaction information. Experimental results demonstrate the superiority of ConTIG on temporal link prediction, temporal node recommendation, and dynamic node classification tasks compared with a range of state-of-the-art baselines, especially for long-interval interaction prediction.
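The ODE-driven "continuous inference" idea can be illustrated with a minimal sketch: between two interactions, a node's embedding follows dz/dt = f(z), integrated forward in time. The dynamics function, dimensions, and Euler integrator below are stand-ins of my own, not ConTIG's actual parameterization:

```python
import numpy as np

def dynamics(z, W):
    """Toy dynamics f(z) = tanh(W z), standing in for a learned network."""
    return np.tanh(W @ z)

def evolve(z0, t0, t1, W, n_steps=100):
    """Euler-integrate an embedding's ODE trajectory from time t0 to t1."""
    z = z0.copy()
    dt = (t1 - t0) / n_steps
    for _ in range(n_steps):
        z = z + dt * dynamics(z, W)
    return z

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 4))   # illustrative random "weights"
z0 = rng.normal(size=4)                  # embedding at the last interaction
z1 = evolve(z0, t0=0.0, t1=1.0, W=W)     # embedding at query time t1
print(z1.shape)  # (4,)
```

This is what lets such models answer queries at arbitrary timestamps rather than only at interaction events; in practice one would use an adaptive ODE solver and train f end-to-end.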

[ 3 ] SlideGraph+: Whole Slide Image Level Graphs to Predict HER2 Status in Breast Cancer
Link: https://arxiv.org/abs/2110.06042

Authors: Wenqi Lu, Michael Toss, Emad Rakha, Nasir Rajpoot, Fayyaz Minhas
Affiliations: Tissue Image Analytics (TIA) Centre, Department of Computer Science, University of Warwick, UK; Nottingham Breast Cancer Research Centre, Division of Cancer and Stem Cells, School of Medicine, Nottingham City Hospital, University of Nottingham, Nottingham, UK
Comments: 20 pages, 11 figures, 3 tables
Abstract: Human epidermal growth factor receptor 2 (HER2) is an important prognostic and predictive factor which is overexpressed in 15-20% of breast cancers (BCa). The determination of its status is a key clinical decision-making step for selecting a treatment regimen and for prognostication. HER2 status is evaluated using transcriptomics or immunohistochemistry (IHC) and in situ hybridisation (ISH), which incur additional costs and tissue burden, in addition to analytical variability in terms of manual observational biases in scoring. In this study, we propose a novel graph neural network (GNN) based model (termed SlideGraph+) to predict HER2 status directly from whole-slide images of routine Haematoxylin and Eosin (H&E) slides. The network was trained and tested on slides from The Cancer Genome Atlas (TCGA), in addition to two independent test datasets. We demonstrate that the proposed model outperforms state-of-the-art methods, with area under the ROC curve (AUC) values > 0.75 on TCGA and > 0.8 on the independent test sets. Our experiments show that the proposed approach can be used for case triaging as well as for pre-ordering diagnostic tests in a diagnostic setting. It can also be used for other weakly supervised prediction problems in computational pathology. The SlideGraph+ code is available at https://github.com/wenqi006/SlideGraph.
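The general graph-level prediction pattern behind slide-graph models can be sketched as follows: slide patches become node features, spatially adjacent patches are connected, a message-passing layer updates the nodes, and global pooling yields one slide-level score. Everything here (graph, features, random weights, layer shape) is a hypothetical illustration, not SlideGraph+'s trained architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, feat_dim = 6, 8
X = rng.normal(size=(n_nodes, feat_dim))   # patch-level features (toy data)

# Adjacency: a chain of spatially neighboring patches, plus self-loops.
A = np.zeros((n_nodes, n_nodes))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]:
    A[i, j] = A[j, i] = 1.0
A += np.eye(n_nodes)

def gnn_layer(A, X, W):
    """One message-passing step: mean-aggregate neighbors, linear map, ReLU."""
    deg = A.sum(axis=1, keepdims=True)
    return np.maximum((A / deg) @ X @ W, 0.0)

W1 = rng.normal(size=(feat_dim, feat_dim))  # illustrative random weights
w_out = rng.normal(size=feat_dim)
H = gnn_layer(A, X, W1)
# Global mean pooling over all patch nodes, then a sigmoid readout:
slide_score = 1.0 / (1.0 + np.exp(-(H.mean(axis=0) @ w_out)))
print(float(slide_score))  # a probability-like slide-level score in (0, 1)
```

The key design point the paper exploits is that only the slide-level label is needed for training (weak supervision): the pooling step lets the gradient reach every patch node without patch-level annotations.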

[ 4 ] GCN-SE: Attention as Explainability for Node Classification in Dynamic Graphs
Link: https://arxiv.org/abs/2110.05598

Authors: Yucai Fan, Yuhang Yao, Carlee Joe-Wong
Affiliations: Carnegie Mellon University
Comments: Accepted by ICDM 2021
Abstract: Graph Convolutional Networks (GCNs) are a popular method from graph representation learning that has proved effective for tasks like node classification. Although typical GCN models focus on classifying nodes within a static graph, several recent variants propose node classification in dynamic graphs whose topologies and node attributes change over time, e.g., social networks with dynamic relationships, or literature citation networks with changing co-authorships. These works, however, do not fully address the challenge of flexibly assigning different importance to snapshots of the graph at different times, which, depending on the graph dynamics, may have more or less predictive power on the labels. We address this challenge by proposing a new method, GCN-SE, that attaches a set of learnable attention weights to graph snapshots at different times, inspired by Squeeze-and-Excitation Networks (SE-Net). We show that GCN-SE outperforms previously proposed node classification methods on a variety of graph datasets. To verify the effectiveness of the attention weights in determining the importance of different graph snapshots, we adapt perturbation-based methods from the field of explainable machine learning to the graph setting and evaluate the correlation between the attention weights learned by GCN-SE and the importance of different snapshots over time. These experiments demonstrate that GCN-SE can in fact identify different snapshots' predictive power for dynamic node classification.
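The core combination step described above, learnable attention weights over per-snapshot embeddings in the spirit of squeeze-and-excitation, can be sketched in a few lines. The embeddings and raw attention values below are random stand-ins for what a trained GCN and SE block would produce:

```python
import numpy as np

rng = np.random.default_rng(0)
n_snapshots, n_nodes, dim = 3, 5, 4

# Per-snapshot node embeddings, as a GCN applied to each snapshot would yield.
snapshot_embs = rng.normal(size=(n_snapshots, n_nodes, dim))

# Learnable scalar attention per snapshot, softmax-normalized so that
# the weights are positive, sum to 1, and are directly comparable.
raw_attn = rng.normal(size=n_snapshots)
attn = np.exp(raw_attn) / np.exp(raw_attn).sum()

# Attention-weighted combination: snapshots with larger learned weights
# contribute more to the final node representations used for classification.
combined = np.tensordot(attn, snapshot_embs, axes=1)  # shape (n_nodes, dim)
print(combined.shape)  # (5, 4)
```

Because each snapshot's contribution reduces to a single scalar, the learned `attn` vector doubles as an explanation: it can be compared directly against perturbation-based importance scores for each snapshot, which is the correlation the paper evaluates.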

Strive at the causes, and let the results follow. Author: 希望每天漲粉. When reposting, please cite the original link: https://www.cnblogs.com/BlairGrowing/p/15401240.html