I was invited to give a talk at Home Depot on knowledge graph construction. The deck walks through how we’ve been thinking about KGs when LLMs are in the loop: not just extracting triples, but deciding what deserves to be in the graph and whether it actually helps the application.
Roughly, it hits three themes that build on each other:

1. Semantic richness: graphs shouldn't stop at entities and products if we care about behavior. We looked at modeling intentions and how they relate to each other through causal, temporal, and conceptual ties, not just isolated intent labels (see the first sketch below).
2. Autonomous construction: scaling past hand-crafted schemas by extracting structure from web-scale text while inducing concepts and relations, including events rather than only static entities. This is where ATLAS-style pipelines fit.
3. Task-aware optimization: a graph can look fine yet hurt retrieval or QA. AutoGraph-R1 closes the loop with reinforcement learning so construction is judged by downstream utility, i.e. whether the graph carries answers well or indexes the right passages, not just local extraction scores (second sketch below).
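To ground the first theme, here is a minimal sketch of what intent-level structure might look like as data. The node texts and relation names are mine, for illustration only; they don't come from any of the systems mentioned above:

```python
from dataclasses import dataclass, field

@dataclass
class Intent:
    text: str                                # a user intention, e.g. "waterproof a deck"
    edges: dict[str, list["Intent"]] = field(default_factory=dict)

    def link(self, relation: str, other: "Intent") -> None:
        """Attach a typed edge (causal, temporal, or conceptual) to another intent."""
        self.edges.setdefault(relation, []).append(other)

# Hypothetical home-improvement example: intents related by edge type,
# rather than sitting in the graph as flat, disconnected labels.
waterproof = Intent("waterproof a deck")
buy_sealant = Intent("buy deck sealant")
sand_deck = Intent("sand the deck")

waterproof.link("causes", buy_sealant)       # causal tie
sand_deck.link("precedes", buy_sealant)      # temporal tie
```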
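And for the third theme, a toy version of the task-aware scoring idea: reward a constructed graph by whether a downstream QA routine succeeds over it, not by local extraction accuracy. The function names and triple format are stand-ins, not AutoGraph-R1's actual interfaces:

```python
from typing import Callable

Triple = tuple[str, str, str]

def task_reward(
    graph: set[Triple],
    qa_pairs: list[tuple[str, str]],
    answer_fn: Callable[[set[Triple], str], str],
) -> float:
    """Fraction of questions answered correctly using the graph.

    In an AutoGraph-R1-style loop, this downstream score (not extraction
    accuracy) is the signal the construction policy is trained on.
    """
    hits = sum(answer_fn(graph, q) == gold for q, gold in qa_pairs)
    return hits / max(len(qa_pairs), 1)

# Toy usage: a one-triple graph and a trivial lookup "QA system".
graph = {("deck sealant", "protects_against", "water")}

def toy_answer(g: set[Triple], question: str) -> str:
    for subj, _rel, obj in g:
        if subj in question:                 # naive subject match, purely illustrative
            return obj
    return ""

print(task_reward(graph, [("What does deck sealant protect against?", "water")], toy_answer))
# 1.0: the graph carries the answer, so this construction earns full reward.
```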
Along the way I stressed ideas that keep showing up across these projects, especially conceptualization as the glue, and wrapped up with why structured graphs still pair naturally with LLMs (explainability, easy updates, compositional reasoning) even as models get stronger.
Slides: Download PDF
Thanks to everyone at Home Depot for hosting the conversation.