
Meet ULTRA: A Pre-Trained Foundation Model for Knowledge Graph Reasoning that Works on Any Graph and Outperforms Supervised SOTA Models on 50+ Graphs

by Adnan Hassan

ULTRA is a model designed to learn universal and transferable graph representations for knowledge graphs (KGs). ULTRA builds relational representations by conditioning them on interactions between relations, which lets it generalise to any KG with a different entity and relation vocabulary. In link prediction experiments, a pre-trained ULTRA model exhibits impressive zero-shot inductive inference on new graphs, often outperforming specialised baselines.

Researchers from several institutions have come together to address the challenge of building foundation models for KGs capable of universal inference. Their work presents ULTRA, a model for learning versatile graph representations without relying on textual information. The study distinguishes ULTRA from text-based approaches and describes the dataset types used in the experiments, including transductive datasets and inductive datasets with new entities. Existing inductive methods for link prediction in KGs are reviewed, with an emphasis on their limitations.

Their work addresses the difficulty of applying the pre-training and fine-tuning paradigm, successful in domains like language and vision, to KGs, whose entity and relation vocabularies vary from graph to graph. ULTRA is an approach for learning universal graph representations that enables zero-shot transfer to new KGs with different relations and structures. By leveraging interactions between relations, ULTRA generalises across KGs of various sizes and relational vocabularies, aiming to make effective pre-training and fine-tuning possible for KG reasoning.

ULTRA is introduced to learn universal graph representations, enabling inference on graphs with varying entity and relation vocabularies. It employs a three-step algorithm: lift the original graph to a graph of relations, obtain relation representations conditioned on the query, and use those representations to predict links. ULTRA’s performance is compared to specialised baselines on 57 KGs, showing strong zero-shot inductive inference. Fine-tuning enhances performance further, making it competitive with or superior to baseline models trained on specific graphs.
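To make the three steps concrete, below is a minimal Python sketch of step one, lifting a KG into a graph whose nodes are relations. The four interaction types (head-to-head, head-to-tail, tail-to-head, tail-to-tail) follow the paper’s description of the relation graph, but the toy triples, function name, and printout are illustrative assumptions; steps two and three, which in ULTRA are handled by GNN message passing conditioned on the query, are not reproduced here.

```python
from collections import defaultdict

# Toy KG as (head, relation, tail) triples; names are arbitrary placeholders.
triples = [
    ("paris", "capital_of", "france"),
    ("france", "located_in", "europe"),
    ("berlin", "capital_of", "germany"),
    ("germany", "located_in", "europe"),
]

def lift_to_relation_graph(triples):
    """Step 1: build a graph whose nodes are the relations of the KG.

    Two relations get an edge whenever they share an entity, and the edge is
    typed by how they share it: head-to-head (h2h), head-to-tail (h2t),
    tail-to-head (t2h), or tail-to-tail (t2t).
    """
    heads, tails = defaultdict(set), defaultdict(set)
    for h, r, t in triples:
        heads[r].add(h)
        tails[r].add(t)

    relations = sorted(set(heads) | set(tails))
    edges = []
    for r1 in relations:
        for r2 in relations:
            if r1 == r2:
                continue  # self-interactions omitted in this toy version
            if heads[r1] & heads[r2]:
                edges.append((r1, "h2h", r2))
            if heads[r1] & tails[r2]:
                edges.append((r1, "h2t", r2))
            if tails[r1] & heads[r2]:
                edges.append((r1, "t2h", r2))
            if tails[r1] & tails[r2]:
                edges.append((r1, "t2t", r2))
    return relations, edges

relations, rel_edges = lift_to_relation_graph(triples)
print("relations:", relations)
print("relation-graph edges:", rel_edges)
```

Because the relation graph depends only on how relations co-occur, not on which entities or relation names appear, the same pre-trained model can be applied to a KG with an entirely new vocabulary.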

The proposed method, ULTRA, performs exceptionally well in zero-shot inference, often surpassing baselines trained on the specific graphs. Fine-tuning further improves performance, effectively closing the gap between pre-trained and baseline results. ULTRA exhibits remarkable improvements on smaller inductive graphs, achieving almost three times better performance on FB-25 and FB-50. The evaluation metrics are MRR (mean reciprocal rank) and Hits@10, reported against the full entity sets.
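For reference, both reported metrics can be computed directly from the rank assigned to the correct entity among all candidates. The sketch below uses a hypothetical list of ranks and is not tied to ULTRA’s codebase.

```python
def mrr(ranks):
    """Mean reciprocal rank: average of 1/rank of the true entity."""
    return sum(1.0 / r for r in ranks) / len(ranks)

def hits_at_k(ranks, k=10):
    """Fraction of queries whose true entity is ranked within the top k."""
    return sum(1 for r in ranks if r <= k) / len(ranks)

# Hypothetical ranks of the correct tail entity for ten test queries,
# each computed against the full entity set.
ranks = [1, 3, 2, 15, 1, 7, 42, 4, 1, 9]
print(f"MRR     = {mrr(ranks):.3f}")
print(f"Hits@10 = {hits_at_k(ranks, 10):.3f}")
```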

In conclusion, ULTRA offers universal and transferable graph representations, excelling in training and inference on diverse multi-relational graphs without input features. It outperforms tailored supervised baselines on a wide range of graphs, even in zero-shot scenarios, by an average of 15%, with further improvement through fine-tuning. ULTRA’s ability to generalise to new, unseen graphs with different relational structures makes it a promising choice for inductive and transferable knowledge graph reasoning. Its evaluation on 57 KGs consistently shows superior performance compared to baselines trained on specific graphs.

Future work suggests exploring additional strategies for capturing relation-to-relation interactions. The need for evaluation metrics more comprehensive than Hits@10 computed over 50 random negatives is emphasised. The authors also encourage investigating the potential benefits of transfer learning for KG representation learning, which have yet to be fully explored, and recommend further research into inductive learning methods that generalise to KGs with varying relation sets.

Check out the Paper and GitHub. All credit for this research goes to the researchers on this project.
