[Research Update] Unifying Logic Reasoning in Knowledge Graphs with Diffusion Models
I am thrilled to share our latest work, DARK, a unified framework for Deductive and Abductive Reasoning in Knowledge Graphs built on Masked Diffusion Models. Congratulations to Yisen Gao!
🔍 The Problem: Traditionally, deductive reasoning (finding answers to complex queries) and abductive reasoning (generating hypotheses to explain observations) are treated as isolated tasks. However, they are inherently complementary: deduction validates hypotheses, while abduction uncovers logical patterns.
💡 Our Solution - DARK: We treat logical queries and their conclusions as a unified sequence, leveraging the bidirectional generation capability of Masked Diffusion Models.
- Self-Reflective Denoising: Mirrors human reasoning by generating candidate hypotheses and using deductive verification to select the best one at each step.
- Logic-Exploration RL: A reinforcement learning approach that masks both queries and conclusions to explore novel logical compositions.
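To make the "unified sequence" idea concrete, here is a minimal sketch of how a query and its conclusion can share one token sequence, with one side masked depending on the reasoning direction. The token names, `[MASK]`/`[SEP]` conventions, and the `mask_sequence` helper are illustrative assumptions, not the paper's actual implementation.

```python
MASK = "[MASK]"

def mask_sequence(query_tokens, conclusion_tokens, mode):
    """Build one joint sequence and mask one side of it.

    mode="deductive": mask the conclusion, so the model fills in answers.
    mode="abductive": mask the query, so the model fills in a hypothesis.
    """
    seq = list(query_tokens) + ["[SEP]"] + list(conclusion_tokens)
    boundary = len(query_tokens) + 1  # query plus separator
    if mode == "deductive":
        return seq[:boundary] + [MASK] * (len(seq) - boundary)
    elif mode == "abductive":
        return [MASK] * len(query_tokens) + seq[len(query_tokens):]
    raise ValueError(f"unknown mode: {mode}")

# Toy query: "entities born in France that won a Nobel prize"
q = ["?x", "bornIn", "France", "AND", "?x", "wonAward", "Nobel"]
c = ["MarieCurie"]
print(mask_sequence(q, c, "deductive"))  # conclusion slot masked
print(mask_sequence(q, c, "abductive"))  # query slots masked
```

The same masked sequence format serves both tasks, which is what lets a single bidirectional denoiser handle deduction and abduction.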
📈 Impact: Extensive experiments show that DARK achieves state-of-the-art performance on both deductive and abductive tasks across benchmark datasets (FB15k-237, WN18RR, DBpedia50), significantly outperforming existing methods and general LLMs such as GPT-4o in complex logical scenarios.
This work bridges the gap between generative diffusion models and structured logical reasoning.
🔗 Read the full paper: https://arxiv.org/abs/2510.11462 🔗 Use our implementation: https://github.com/HKUST-KnowComp/DARK
#ArtificialIntelligence #KnowledgeGraph #Reasoning #DiffusionModels #DeepLearning #AcademicResearch
DARK: When Diffusion Models Meet Knowledge Graph Reasoning, Deduction and Abduction Are Finally Unified
Hello everyone! I am excited to share our latest work, DARK (Unifying Deductive and Abductive Reasoning in Knowledge Graphs with Masked Diffusion Model). The paper has been accepted at WWW 2026. Congratulations to the first author, Yisen Gao.
Core pain point: In knowledge graph (KG) reasoning, deductive reasoning (finding answers for a query) and abductive reasoning (finding explanations/hypotheses for an observation) are usually studied in isolation. Yet they are two sides of the same coin: hypotheses produced by abduction need deduction to be verified, and deduction can in turn exploit the deeper logical patterns that abduction uncovers. Existing methods (including many LLMs) often struggle to handle both, or perform poorly on complex logic such as negation and disjunction.
Our method (DARK): We propose a unified framework based on a Masked Diffusion Model. The core insight: treat the logical query and its conclusion as one joint sequence.
- Bidirectional modeling: The diffusion model's bidirectional generation lets us produce a conclusion from a query (deduction) and recover a query from a conclusion (abduction).
- Self-reflective Denoising: During abductive reasoning, the model generates multiple candidate hypotheses and uses deductive reasoning to verify which one best matches the observations, guiding the next denoising step. This mirrors the human "hypothesize-then-verify" loop.
- Logic-exploration RL: We design an RL strategy that masks parts of both the query and the conclusion, encouraging the model to explore more diverse yet plausible logical compositions and avoid getting stuck in local optima.
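The self-reflective denoising step can be sketched as a simple select-by-verification loop: generate candidates, deduce what each implies, and keep the one whose implications best match the observations. The Jaccard-overlap scoring and the `deduce` callback below are illustrative stand-ins for the paper's learned components.

```python
def self_reflective_step(candidates, observed_answers, deduce):
    """Pick the candidate hypothesis whose deduced answer set best
    matches the observed answers.

    `deduce` maps a hypothesis (query) to its set of predicted answers;
    matching is scored here with Jaccard overlap for illustration.
    """
    def jaccard(a, b):
        return len(a & b) / len(a | b) if (a | b) else 0.0

    scored = [(jaccard(deduce(h), observed_answers), h) for h in candidates]
    return max(scored)[1]  # keep the best-verified hypothesis

# Toy example: three candidate hypotheses with hand-written deductions
toy_deductions = {"h1": {"a", "b"}, "h2": {"a", "b", "c"}, "h3": {"d"}}
best = self_reflective_step(["h1", "h2", "h3"], {"a", "b", "c"},
                            toy_deductions.get)
print(best)  # → h2, whose deduced answers exactly match the observations
```

Iterating this selection at each denoising step is what gives the procedure its "hypothesize, then verify" character.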
Results: On the FB15k-237, WN18RR, and DBpedia50 benchmarks, DARK achieves SOTA performance on both deductive and abductive tasks. In abductive reasoning in particular, compared with AbductiveKGR and large models such as GPT-4o and DeepSeek-V3, our advantage is especially pronounced on complex logical structures (e.g., queries containing negation).
We welcome you to read the paper and discuss! 📄 Paper: https://arxiv.org/abs/2510.11462 💻 Code: https://github.com/HKUST-KnowComp/DARK
#KnowledgeGraph #LogicalReasoning #DiffusionModels #ArtificialIntelligence #DeepLearning