Graph representation learning is an effective tool for facilitating graph analysis with machine learning methods. Most GNNs, including Graph Convolutional Networks (GCNs), Graph Recurrent Neural Networks (GRNNs), and Graph Auto-Encoders (GAEs), represent nodes as deterministic vectors and do not exploit the uncertainty in the hidden variables. The Variational Graph Auto-Encoder (VGAE) framework addresses this issue by combining deep generative models with GAEs. While traditional VGAE-based methods can capture hidden and hierarchical dependencies in the latent space, they are limited in their ability to capture the multimodality of the data. Here, we propose using a Gaussian Mixture Model (GMM) to model the prior distribution in VGAE. Furthermore, an adversarial regularization is incorporated into the proposed approach to ensure that the latent representations have a beneficial effect on the results. We demonstrate the performance of the proposed method on clustering and link prediction tasks. Our experimental results on real-world datasets compare favorably with state-of-the-art methods.
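To make the general idea concrete, below is a minimal sketch (not the authors' implementation) of a VGAE whose prior is a learnable Gaussian mixture rather than a single standard normal: a two-layer GCN encoder produces per-node means and log-variances, an inner-product decoder reconstructs edges, and the KL term is estimated by Monte Carlo against the mixture prior. The adversarial regularizer described in the abstract is omitted here, and all class, layer, and parameter names are illustrative assumptions.

import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseGCNLayer(nn.Module):
    """One graph convolution on a dense, symmetrically normalized adjacency."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, a_norm):
        return a_norm @ self.lin(x)


class GMMVGAE(nn.Module):
    """VGAE with a learnable Gaussian-mixture prior over node embeddings."""
    def __init__(self, in_dim, hid_dim, z_dim, n_components):
        super().__init__()
        self.gc1 = DenseGCNLayer(in_dim, hid_dim)
        self.gc_mu = DenseGCNLayer(hid_dim, z_dim)
        self.gc_logvar = DenseGCNLayer(hid_dim, z_dim)
        # GMM prior parameters: mixture logits, component means, log-variances.
        self.pi_logits = nn.Parameter(torch.zeros(n_components))
        self.mu_c = nn.Parameter(torch.randn(n_components, z_dim) * 0.1)
        self.logvar_c = nn.Parameter(torch.zeros(n_components, z_dim))

    def encode(self, x, a_norm):
        h = F.relu(self.gc1(x, a_norm))
        return self.gc_mu(h, a_norm), self.gc_logvar(h, a_norm)

    def reparameterize(self, mu, logvar):
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def decode(self, z):
        # Inner-product decoder: edge probability from embedding similarity.
        return torch.sigmoid(z @ z.t())

    def log_gmm_prior(self, z):
        # log p(z) under the mixture, evaluated per node -> shape [N].
        log_pi = F.log_softmax(self.pi_logits, dim=0)           # [K]
        diff = z.unsqueeze(1) - self.mu_c.unsqueeze(0)           # [N, K, D]
        log_comp = -0.5 * (self.logvar_c
                           + diff.pow(2) / self.logvar_c.exp()
                           + math.log(2 * math.pi)).sum(-1)      # [N, K]
        return torch.logsumexp(log_pi + log_comp, dim=1)

    def loss(self, x, a_norm, adj_label):
        mu, logvar = self.encode(x, a_norm)
        z = self.reparameterize(mu, logvar)
        recon = F.binary_cross_entropy(self.decode(z), adj_label)
        # Monte Carlo KL with one sample: E_q[log q(z|x) - log p(z)].
        log_q = (-0.5 * (logvar
                         + (z - mu).pow(2) / logvar.exp()
                         + math.log(2 * math.pi))).sum(-1)
        kl = (log_q - self.log_gmm_prior(z)).mean()
        # The relative weighting of the KL term is a tunable choice.
        return recon + kl

In practice one would precompute the normalized adjacency a_norm = D^{-1/2}(A + I)D^{-1/2}, train with the loss above, and then use the posterior means either for link prediction (via the decoder) or for clustering (e.g., by component responsibilities under the learned mixture); these usage details are assumptions, not taken from the paper.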

Original publication

DOI: 10.1016/j.neucom.2022.11.087
Type: Journal article
Journal: Neurocomputing
Publication Date: 28/02/2023
Volume: 523
Pages: 157-169