MemQ: Enhancing Knowledge Graph Question Answering with Memory-Augmented Query Reconstruction
LLMs have shown strong performance in Knowledge Graph Question Answering (KGQA) by leveraging planning and interactive strategies to query knowledge graphs. Many existing approaches rely on SPARQL-based tools to retrieve information, allowing models to generate accurate answers. Some methods enhance LLMs' reasoning abilities by constructing tool-based reasoning paths, while others employ decision-making frameworks that use environmental feedback to interact with knowledge graphs. Although these strategies have improved KGQA accuracy, they often blur the distinction between tool use and actual reasoning. This confusion reduces interpretability, diminishes readability, and increases the risk of hallucinated tool invocations, where models generate incorrect or irrelevant responses due to over-reliance on parametric knowledge.

To address these limitations, researchers have explored memory-augmented techniques that provide external knowledge storage to support complex reasoning. Prior work has integrated memory modules for long-term context retention, enabling more reliable decision-making. Early KGQA methods used key-value memory and graph neural networks to infer answers, while recent LLM-based approaches leverage large-scale models for enhanced reasoning. Some strategies employ supervised fine-tuning to improve understanding, while others use discriminative techniques to mitigate hallucinations. However, existing KGQA methods still struggle to separate reasoning from tool invocation, leading to a lack of focus on logical inference.

Researchers from the Harbin Institute of Technology propose Memory-augmented Query Reconstruction (MemQ), a framework that separates reasoning from tool invocation in LLM-based KGQA. MemQ establishes a structured query memory using LLM-generated descriptions of decomposed query statements, enabling independent reasoning. This approach enhances readability by generating explicit reasoning steps and retrieving relevant memory based on semantic similarity. MemQ improves interpretability and reduces hallucinated tool use by eliminating unnecessary tool reliance. Experimental results show that MemQ achieves state-of-the-art performance on the WebQSP and CWQ benchmarks, demonstrating its effectiveness in enhancing LLM-based KGQA reasoning.

MemQ separates reasoning from tool invocation through three key tasks: memory construction, knowledge reasoning, and query reconstruction. Memory construction stores query statements with corresponding natural-language descriptions for efficient retrieval. Knowledge reasoning generates structured multi-step reasoning plans, ensuring logical progression in answering queries. Query reconstruction then retrieves relevant query statements based on semantic similarity and assembles them into a final query (a minimal sketch of this retrieve-and-assemble loop follows the experimental summary below). MemQ strengthens reasoning by fine-tuning LLMs on explanation-statement pairs and uses an adaptive memory-recall strategy, outperforming prior methods on WebQSP and CWQ with state-of-the-art results.

The experiments assess MemQ's performance in knowledge graph question answering on the WebQSP and CWQ datasets, using Hits@1 and F1 as evaluation metrics and comparing against tool-based baselines such as RoG and ToG. MemQ, built on Llama2-7b, outperforms previous methods, showing improved reasoning via its memory-augmented approach, and analytical experiments highlight superior structural and edge accuracy.
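To make the memory-and-reconstruction workflow described above concrete, here is a minimal Python sketch, not the authors' implementation: it stores LLM-written natural-language descriptions alongside decomposed SPARQL statements, embeds the descriptions with an off-the-shelf sentence encoder (a stand-in for whatever encoder MemQ actually uses), recalls the statement whose description best matches each natural-language reasoning step, and assembles the recalled statements into a final query. The class and function names (QueryMemory, recall, reconstruct_query) and the Freebase-style example triples are illustrative assumptions.

```python
# Minimal illustrative sketch of MemQ-style memory construction and
# similarity-based query reconstruction (not the authors' code).
from dataclasses import dataclass

import numpy as np
from sentence_transformers import SentenceTransformer  # stand-in sentence encoder


@dataclass
class MemoryEntry:
    description: str  # LLM-generated natural-language description
    statement: str    # decomposed query statement (e.g., a SPARQL triple pattern)


class QueryMemory:
    def __init__(self, entries: list[MemoryEntry]):
        self.entries = entries
        self.encoder = SentenceTransformer("all-MiniLM-L6-v2")
        # Embed every description once so recall is a simple vector lookup.
        self.keys = self.encoder.encode(
            [e.description for e in entries], normalize_embeddings=True
        )

    def recall(self, reasoning_step: str, top_k: int = 1) -> list[MemoryEntry]:
        """Return the entries whose descriptions best match one reasoning step."""
        query = self.encoder.encode([reasoning_step], normalize_embeddings=True)[0]
        scores = self.keys @ query  # cosine similarity (embeddings are normalized)
        best = np.argsort(-scores)[:top_k]
        return [self.entries[i] for i in best]


def reconstruct_query(memory: QueryMemory, reasoning_plan: list[str]) -> str:
    """Assemble a final query from the statement recalled for each reasoning step."""
    statements = [memory.recall(step)[0].statement for step in reasoning_plan]
    return "\n".join(statements)


# Toy example: two memory entries and a two-step reasoning plan.
memory = QueryMemory([
    MemoryEntry("Find the person who directed the given film.",
                "?film ns:film.film.directed_by ?director ."),
    MemoryEntry("Find the place where the given person was born.",
                "?director ns:people.person.place_of_birth ?place ."),
])
plan = ["Identify the director of the film.",
        "Look up where that director was born."]
print(reconstruct_query(memory, plan))
```

In the actual framework the reasoning plan is produced by an LLM fine-tuned on explanation-statement pairs and recall is adaptive rather than a fixed top-1, but the separation of concerns is the same: reason in natural language first, then retrieve and assemble the tool calls.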
Ablation studies confirm MemQ's effectiveness in tool utilization and reasoning stability. Additional analyses explore reasoning errors, hallucinations, data efficiency, and model universality, demonstrating its adaptability across architectures. MemQ significantly enhances structured reasoning while reducing errors in multi-step queries.

In conclusion, the study introduces MemQ, a memory-augmented framework that separates LLM reasoning from tool invocation to reduce hallucinations in KGQA. By incorporating a query memory module, MemQ improves query reconstruction and enhances reasoning clarity. The approach enables natural-language reasoning while mitigating errors in tool usage. Experiments on the WebQSP and CWQ benchmarks demonstrate that MemQ outperforms existing methods, achieving state-of-the-art results. By addressing the confusion between tool utilization and reasoning, MemQ enhances the readability and accuracy of LLM-generated responses, offering a more effective approach to KGQA.