Abstract:
Graph representation learning, which learns node representations in a self-supervised manner for downstream supervised tasks such as node classification, has attracted wide attention in the field of graph mining. However, recent studies have shown that graph representation learning is not sufficiently robust against malicious attacks on the graph structure, and its performance on node classification tasks can even fall significantly below that of basic graph convolutional network models. This paper targets adversarial poisoning attacks and identifies that the vulnerability of graph self-supervised node classification models is influenced not only by the node representation module (embedding side) but also by the robustness of the task-side neural network. Therefore, a bilateral robustness enhanced node classification model (BREM) based on contrastive learning is proposed. Specifically, on the node embedding side, a graph convolution based on edge curvature is introduced to correct messages, increasing the likelihood that aggregation occurs among nodes of the same category, and a local-global information contrastive approach is used to obtain robust node embeddings. The node embeddings are then used to reconstruct inter-node relationships, reducing the impact of attack edges on node representations. Unlike traditional methods that feed node embeddings directly into a multilayer perceptron (MLP), the task side uses the node features updated with the reconstructed structure together with the original node features to construct multi-view information for each node, and optimizes the task-side model so that outputs from different views become more similar, thereby enhancing robustness. Experiments on three popular benchmark datasets under different attack types, including non-targeted attacks, targeted attacks, and random attacks, validate that BREM achieves better or comparable robustness compared to current strong baseline models.
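The task-side multi-view consistency idea described in the abstract can be illustrated with a minimal sketch. The function names and the choice of symmetric KL divergence as the agreement measure are illustrative assumptions, not details taken from the paper: the point is only that the classifier is penalized when its predictions from the two views (original features vs. features updated with the reconstructed structure) disagree.

```python
# Hypothetical sketch of a multi-view consistency loss: the task-side model is
# optimized so that class distributions predicted from two views of the same
# node agree. Function names and the symmetric-KL choice are assumptions for
# illustration, not the paper's exact formulation.
import math

def softmax(logits):
    # Numerically stable softmax over a list of raw class scores.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def consistency_loss(logits_view_a, logits_view_b):
    """Symmetric KL divergence between the class distributions predicted
    from two views of one node; lower means more view-invariant output."""
    p = softmax(logits_view_a)
    q = softmax(logits_view_b)
    kl_pq = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    kl_qp = sum(qi * math.log(qi / pi) for pi, qi in zip(p, q))
    return 0.5 * (kl_pq + kl_qp)

# Identical predictions from both views incur zero penalty; disagreement
# between the views produces a positive loss to be minimized in training.
same = consistency_loss([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
diff = consistency_loss([1.0, 2.0, 3.0], [3.0, 2.0, 1.0])
```

In training, a loss of this form would be added to the usual supervised classification loss, so that the task-side network learns outputs that are stable across the original and reconstructed views of each node.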