WWW.MARKTECHPOST.COM
CLDG: A Simple Machine Learning Framework that Sets New Benchmarks in Unsupervised Learning on Dynamic Graphs
Graph Neural Networks (GNNs) have emerged as a transformative force in many real-life applications, from corporate finance risk management to local traffic prediction, and they have accordingly been a major research focus for years. A significant limitation of current work, however, is its data dependency: with the focus on supervised and semi-supervised paradigms, progress depends on the availability of ground-truth labels, a requirement that often goes unmet. The sparsity of labels also stems from the nature of graphs themselves: a graph is an abstraction of the real world, not as directly interpretable as video, images, or text, so labeling it requires expert knowledge and experience.

With these challenges and the rising expense of supervised graph learning, researchers have pivoted toward unsupervised contrastive learning. It maximizes mutual information between different augmented views of a graph, generated by perturbing its nodes, edges, and features. Although this approach is promising and eliminates the need for labels, it is not always possible to confirm that labels and semantics remain unchanged after augmentation, which can significantly undermine performance. To see how augmentation can be detrimental, consider a node: adding or deleting one either injects noise or removes information, and both hurt. Existing static-graph contrastive learning methods may therefore not be optimal for dynamic graphs. This article discusses recent research that claims to generalize contrastive learning to dynamic graphs.

Researchers from Xi'an Jiaotong University, China, presented CLDG, an efficient unsupervised Contrastive Learning framework on Dynamic Graphs, which performs representation learning on both discrete-time and continuous-time dynamic graphs.
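Before turning to CLDG's design, the perturbation-based view generation that static-graph contrastive methods rely on can be sketched in a few lines. This is a generic illustration, not code from the paper; `drop_edges`, the drop rate, and the toy graph are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def drop_edges(edge_index, drop_rate=0.2):
    """Randomly remove a fraction of edges from a (2, E) edge list.

    Static-graph contrastive methods use perturbations like this to
    build augmented views; the risk is that a dropped edge may carry
    semantics, silently changing what the view represents.
    """
    num_edges = edge_index.shape[1]
    keep = rng.random(num_edges) >= drop_rate
    return edge_index[:, keep]

# A toy 4-node graph with 5 directed edges.
edges = np.array([[0, 0, 1, 2, 3],
                  [1, 2, 2, 3, 0]])
view = drop_edges(edges, drop_rate=0.4)
print(view.shape[1], "edges kept of", edges.shape[1])
```

Node dropping or feature masking follow the same pattern; in each case the augmented view can only lose or distort information relative to the original graph.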
It resolves the dilemma of selecting time periods as contrastive pairs when applying contrastive learning to dynamic graphs. CLDG is a lightweight, highly scalable algorithm, thanks to its simplicity: users get lower time and space complexity and can choose from a pool of encoders.

The proposed framework consists of five major components:

- a timespan view sampling layer
- a base encoder
- a readout function
- a projection head
- a contrastive loss function

The research team first generated multiple views from continuous-time dynamic graphs via a timespan view sampling method; here, the view sampling layer extracts temporally persistent signals. They then learned feature representations of nodes and neighborhoods through a weight-shared encoder, a readout function, and a weight-shared projection head. For the readout layer, the authors used statistics-based functions such as average, maximum, and summation.

An important insight at this point is temporal translation invariance: regardless of the encoder used for training, the predicted labels of the same node tend to be similar across different time spans. The paper presents separate local-level and global-level contrastive losses to maintain temporal translation invariance at both levels. At the local level, representations of the same node across time spans are treated as positive pairs, pulling them closer while pushing different nodes apart; the global-level loss enforces the same invariance over the pooled (readout) representations. Building on this, the authors designed four timespan view sampling strategies to explore the optimal view-interval distance for contrastive pairs; these strategies differ in their physical and temporal overlap rates and therefore carry different semantic contexts.

The paper validated CLDG on seven real-world dynamic graph datasets against twelve baselines.
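The pipeline described above can be sketched minimally in PyTorch. The function names (`timespan_views`, `local_contrastive_loss`) and the toy tensors are hypothetical illustrations of the described components, not the authors' implementation; the GNN encoder and projection head are elided, and the global-level loss is omitted.

```python
import torch
import torch.nn.functional as F

def timespan_views(edge_index, edge_time, num_views, span):
    """Slice a continuous-time edge stream into consecutive timespan views.

    A simplified stand-in for the paper's view sampling layer (the four
    strategies additionally control how much the views overlap).
    """
    t0 = edge_time.min()
    views = []
    for k in range(num_views):
        mask = (edge_time >= t0 + k * span) & (edge_time < t0 + (k + 1) * span)
        views.append(edge_index[:, mask])
    return views

def local_contrastive_loss(z1, z2, tau=0.5):
    """InfoNCE-style local-level loss: the same node in two timespan views
    forms a positive pair; every other node acts as a negative."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau          # (N, N) cosine similarities
    targets = torch.arange(z1.size(0))  # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

# Toy continuous-time graph: 4 directed edges with timestamps.
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 2, 3, 0]])
edge_time = torch.tensor([0.0, 1.0, 2.0, 3.0])
views = timespan_views(edge_index, edge_time, num_views=2, span=2.0)

# Node embeddings for each view, as a weight-shared encoder + projection
# head would produce them (here just correlated random tensors).
torch.manual_seed(0)
z_view1 = torch.randn(8, 16)
z_view2 = z_view1 + 0.05 * torch.randn(8, 16)
loss = local_contrastive_loss(z_view1, z_view2)

# An average readout, pooling node embeddings into a single representation.
graph_repr = z_view1.mean(dim=0)
```

Swapping `mean` for `max` or `sum` in the last line gives the other readout variants the authors mention.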
The proposed method outperformed eight unsupervised state-of-the-art baselines and was on par with the remaining four semi-supervised methods. Furthermore, compared with existing graph methods, CLDG reduced model parameters by an average factor of 2,000 and training time by an average factor of 130.

Conclusion: CLDG is a practical, lightweight framework that generalizes contrastive learning to dynamic graphs. It exploits additional temporal information and achieves state-of-the-art performance among unsupervised dynamic graph techniques while remaining competitive with semi-supervised methods.

Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project.

Adeeba Alam Ansari is currently pursuing her Dual Degree at the Indian Institute of Technology (IIT) Kharagpur, earning a B.Tech in Industrial Engineering and an M.Tech in Financial Engineering. With a keen interest in machine learning and artificial intelligence, she is an avid reader and an inquisitive individual. Adeeba firmly believes in the power of technology to empower society and promote welfare through innovative solutions driven by empathy and a deep understanding of real-world challenges.