KGLens: Towards Efficient and Effective Knowledge Probing of Large Language Models with Knowledge Graphs Apple Machine Learning Research

​[[{“value”:”This paper was accepted at the Workshop Towards Knowledgeable Language Models 2024.
Large Language Models (LLMs) might hallucinate facts, while curated Knowledge Graph (KGs) are typically factually reliable especially with domain-specific knowledge. Measuring the alignment between KGs and LLMs can effectively probe the factualness and identify the knowledge blind spots of LLMs. However, verifying the LLMs over extensive KGs can be expensive. In this paper, we present KGLens, a Thompson-sampling-inspired framework aimed at effectively and efficiently measuring the alignment between KGs and…”}]] [[{“value”:”This paper was accepted at the Workshop Towards Knowledgeable Language Models 2024.
Large Language Models (LLMs) might hallucinate facts, while curated Knowledge Graph (KGs) are typically factually reliable especially with domain-specific knowledge. Measuring the alignment between KGs and LLMs can effectively probe the factualness and identify the knowledge blind spots of LLMs. However, verifying the LLMs over extensive KGs can be expensive. In this paper, we present KGLens, a Thompson-sampling-inspired framework aimed at effectively and efficiently measuring the alignment between KGs and…”}]]  Read More  
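The core idea named in the abstract, using Thompson sampling to spend the probing budget on the parts of the KG where the LLM is most likely wrong, can be illustrated with a small sketch. This is not the paper's implementation (KGLens operates on a parameterized KG described in the full paper); the Beta-Bernoulli posterior over per-edge error rates, the class, and the toy edges below are all hypothetical choices for illustration.

```python
import random

class EdgeSampler:
    """Thompson sampling over KG edges: probe the edges the LLM most
    likely gets wrong, instead of exhaustively verifying every fact.

    Illustrative sketch only; the posterior parameterization and names
    here are assumptions, not KGLens's actual design.
    """

    def __init__(self, edges):
        # Beta(1, 1) prior on each edge's error probability:
        # stats[e] = [alpha, beta] = [1 + errors, 1 + correct answers]
        self.stats = {e: [1.0, 1.0] for e in edges}

    def pick_edge(self):
        # Sample an error rate from each edge's Beta posterior and
        # probe the edge with the highest sampled rate.
        draws = {e: random.betavariate(a, b) for e, (a, b) in self.stats.items()}
        return max(draws, key=draws.get)

    def update(self, edge, llm_was_wrong):
        # Bayesian update of the probed edge's error-rate posterior.
        if llm_was_wrong:
            self.stats[edge][0] += 1.0
        else:
            self.stats[edge][1] += 1.0

# Usage: probe a toy two-edge KG with a mock LLM that always fails
# on the second (false) fact.
random.seed(0)
edges = [("Paris", "capital_of", "France"), ("2+2", "equals", "5")]
sampler = EdgeSampler(edges)
for _ in range(200):
    e = sampler.pick_edge()
    sampler.update(e, llm_was_wrong=(e[2] == "5"))
```

After a few hundred rounds, the error-prone edge accumulates most of the probes while the reliably answered edge is sampled only occasionally, which is how Thompson sampling keeps the verification cost low relative to scanning the whole graph.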
