Knowledge Graphs are the control plane for Agentic AI
Why agentic systems fail without deterministic semantic mapping
THE POINT IS: AI agents can only be trusted in operations when their answers are correct, consistent, and explainable. SOPs teach agents how work should happen; knowledge graphs determine what is true, connected, and allowed. Together, they form the deterministic control plane that keeps probabilistic agents from freestyling your business.
Generative and Agentic AI are confidently wrong without bounds and direction
We've all been there: you ask ChatGPT for the answer to some dilemma and it gives you a very confident and very wrong answer. You point out what's wrong, and it cheerfully admits that it was wrong and you were right and here are five reasons why it was wrong.
AI tools without bounds and a deterministic control plane can get you pretty far, but so can driving 100 MPH in a car with no seatbelts. Especially when we're industrializing AI agents for business usage, we need to remove opportunities for wrong answers, inconsistent ones, and results that are hard to explain.
It's not about model tweaks, but about AI "infrastructure" that companies need to invest in alongside their key use cases. This isn't new; it's already codified in risk frameworks:
"The characteristics of trustworthy AI are integrated into organizational policies, processes, and procedures."
— NIST AI Risk Management Framework (p. 5, "Govern" § 1.2)
Remember those SOPs? They contain the "how", but lack the business context
In my last article, I discussed how critical it is to have documentation for your processes, but also to do classic process engineering to make sure they're optimized for your AI agents. That work is an essential part of the agent-building process: it helps move the needle on making sure your agents' steps and outputs are done correctly.
"Without a formal process model, it is impossible to reason about the correctness of process executions."
— Wil M. P. van der Aalst, Process Mining: Data Science in Action
While getting the steps down should be done first, telling the agent how to do a process doesn't guarantee that it will get the right answer or that it'll do that process the same way every time.
I've created a couple of ChatGPT CustomGPTs and have given them specific instructions on how to perform activities. I find that they'll adhere to my instructions several times, but eventually they'll drift and add a different format or flair to the activity. That drift is harmless in personal workflows, but it's not harmless in regulated ones.
Knowledge Graphs are the control plane for Agents
Enter Knowledge Graphs (KGs). When your business partners build a KG for their department or processes, they're creating a model of the entities, relationships, and authoritative sources that are critical for answering questions correctly, consistently, and explainably. KGs are the deterministic control plane for your agents: they bound them, guide them, and give them the business context they need.
Here's what that looks like in practice:
Correct: answers are grounded in authoritative sources and versions (tables, databases, etc.)
Consistent: the same path to the same answers every time, regardless of how you ask the question
Explainable: if asked, humans can look at the reasoning path along the KG chain and explain to an auditor why certain sources are being used
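To make those three properties concrete, here's a minimal sketch of a provenance-carrying KG in Python using networkx. All node names, relationships, and source-system labels are assumptions for illustration, not a prescription for your schema:

```python
# Minimal sketch: each node records its authoritative source so answers
# can be traced to a system of record. All names/versions are hypothetical.
import networkx as nx

kg = nx.DiGraph()

# Entities carry provenance (correctness).
kg.add_node("Customer:C-1001", source="CRM v4.2")
kg.add_node("Account:A-2001", source="CoreBanking v9.1")
kg.add_node("Policy:P-3001", source="PolicyAdmin v7.0")

# Relationships define what is connected and what is allowed.
kg.add_edge("Customer:C-1001", "Account:A-2001", rel="OWNS")
kg.add_edge("Account:A-2001", "Policy:P-3001", rel="COVERED_BY")

def grounded_answer(start: str, end: str):
    """Walk the graph and return the answer plus its provenance trail."""
    path = nx.shortest_path(kg, start, end)  # same route every time (consistency)
    trail = [(node, kg.nodes[node]["source"]) for node in path]
    return path[-1], trail  # answer + explainable chain

answer, provenance = grounded_answer("Customer:C-1001", "Policy:P-3001")
print(answer)      # Policy:P-3001
print(provenance)  # [('Customer:C-1001', 'CRM v4.2'), ...]
```

The point of the `source` attribute is that every answer the agent returns can be handed to a human with its chain of custody attached.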
What's important to note is that the SOP tells the agent how to perform a process, and the KG tells it where to go to perform the various steps. The map gives everyone confidence that the agent is bringing back the right answers from trusted data repositories.
The effort brings multiples of benefits back
It takes a village of business partners to help build the nodes and relationships between the data. I'm not trying to hide that; while Tech plays a role, the business does the important work. The good news, though, is that the benefits multiply every time you connect KGs together to create a larger map that links the enterprise together.
With a unified KG, you can answer questions and perform actions related to:
Customer → Account → Product → Policy → SOP → Allowed Action
Each node above is connected to the next, so questions about Policies can be traced back to Accounts and Customers. Maintenance can be performed on an Account by identifying the Policy number or Customer name. Customers can be authenticated using Account and Policy numbers (two-step) when requesting something like a last name change. And all of those operations are connected to their proper SOPs.
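Here's a hedged, plain-Python sketch of traversing that chain. Every identifier, relationship name, and action below is made up for illustration; a production KG would live in a graph database, not a dict:

```python
# Hypothetical sketch of the Customer -> ... -> Allowed Action chain.
GRAPH = {
    "Customer:C-1001":    [("OWNS", "Account:A-2001")],
    "Account:A-2001":     [("HOLDS", "Product:Auto"),
                           ("COVERED_BY", "Policy:P-3001")],
    "Policy:P-3001":      [("GOVERNED_BY", "SOP:LastNameChange")],
    "SOP:LastNameChange": [("PERMITS", "Action:UpdateLastName")],
}

def walk(start, *rels):
    """Follow a fixed relationship path from a start node.
    Deterministic: the same question always takes the same route."""
    node = start
    for rel in rels:
        matches = [t for (r, t) in GRAPH.get(node, []) if r == rel]
        if not matches:
            return None  # the graph forbids what it doesn't contain
        node = matches[0]
    return node

# "Can customer C-1001 request a last-name change on this policy?"
action = walk("Customer:C-1001", "OWNS", "COVERED_BY", "GOVERNED_BY", "PERMITS")
print(action)  # Action:UpdateLastName -- reached via an auditable, fixed path
```

Notice that the agent never improvises the route: if the path through the graph doesn't exist, the action simply isn't allowed.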
All of a sudden, when the nodes are connected, you can answer exponentially more questions than were possible before by getting more and more specific about what's happening in the world. You can build KGs iteratively over time. You don't have to do it all at once! This is important: don't get overwhelmed and table the whole project because it seems too daunting.
As you connect up different pieces (e.g., a KG built by the Claims team and a KG created by the Customer team), the possibilities explode. Suddenly, proactive agents, dashboards, and actions can be set to alert all customers in Northeastern Texas who have an Auto policy when a hail event is detected 30 miles away. And the list grows the larger the KG eventually becomes.
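Once those pieces are linked, that proactive alert is just a query over the graph. Here's a plain-Python sketch with entirely made-up customers and coordinates; the haversine function stands in for whatever geo service you'd actually use:

```python
# Hypothetical proactive query: find Auto policyholders within 30 miles
# of a detected hail event. All records and coordinates are illustrative.
import math

customers = [
    {"name": "C-1001", "product": "Auto", "lat": 33.20, "lon": -97.10},
    {"name": "C-1002", "product": "Home", "lat": 33.30, "lon": -97.00},
    {"name": "C-1003", "product": "Auto", "lat": 29.80, "lon": -95.40},
]

def miles_between(lat1, lon1, lat2, lon2):
    """Rough great-circle distance in miles (haversine)."""
    r = 3959  # Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

hail_event = {"lat": 33.25, "lon": -97.05}  # detected hail location

to_alert = [
    c for c in customers
    if c["product"] == "Auto"
    and miles_between(c["lat"], c["lon"], hail_event["lat"], hail_event["lon"]) <= 30
]
print([c["name"] for c in to_alert])  # e.g., ['C-1001']
```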
The best part is, when an auditor or state regulator shows up asking why those alerts were sent, the business and Tech teams can open the graph, explain each node, highlight the relationships between them, and, viewed side-by-side with the agent's thought chain, the rationale becomes clear. This is where many AI demos without KGs fall apart at a critical moment.
5 things you can try today to stress-test your agents, chatbots, or everyday AI tools
You can put these concepts under the microscope at your company by trying these five things:
1. Run the same question three times. Ask your agent a real customer-service question (e.g., eligibility, coverage, or next-best action) three separate times, ideally across channels or sessions.
2. Compare the answers side by side. Do you receive the same conclusion each time, or does wording, logic, or outcome drift? (A small harness for steps 1 and 2 appears after this list.)
3. Demand the policy or source of truth. Require the agent to identify which policy, rule set, or authoritative document version was used to generate the answer.
4. Ask for the reasoning path. Request a clear explanation of how the answer was derived: which entities were involved, which relationships applied, and why alternatives were excluded.
5. Stress-test cross-domain consistency. Ask the same question from the perspective of customer service, underwriting, or claims. If answers diverge, your enterprise meaning is fragmented.
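For steps 1 and 2, a tiny harness makes drift visible. `ask_agent` below is a hypothetical placeholder you'd wire to your own agent, chatbot, or API:

```python
# Hypothetical drift-check harness for steps 1-2 above.
def ask_agent(question: str) -> str:
    raise NotImplementedError("connect this to your agent or chatbot API")

def drift_check(question: str, runs: int = 3) -> list[str]:
    """Ask the same question several times and flag inconsistencies."""
    answers = [ask_agent(question) for _ in range(runs)]
    if len(set(answers)) == 1:
        print("Consistent: identical answer on every run.")
    else:
        print(f"Drift: {len(set(answers))} distinct answers in {runs} runs.")
        for i, a in enumerate(answers, 1):
            print(f"  run {i}: {a[:120]}")
    return answers

# drift_check("Is policy P-3001 eligible for a last-name change?")
```

Exact-match comparison is deliberately coarse and will flag harmless rewording too; in practice you'd compare the cited sources and conclusions, not just the text.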
Look out for these gotchas:
If the answers come from the wrong source → you lack correctness
If answers change when asked differently → you lack consistency
If reasoning doesn't show you enough detail → you lack explainability
Next, try to ask your agent to answer questions that you know require cross-departmental data. It may be very telling to see how much it hallucinates or assumes relationships that aren't there. If you have to create huge prompts every time you ask a question to get closer to the answer you need, that's where knowledge graphs make a huge difference.
At this point, the pattern should be pretty clear
In order to get correct, consistent, and explainable answers from AI, you need:
SOPs that show how work is supposed to happen
Knowledge graphs that capture what is true, what is connected, and what is allowed
Agents that do the work, using judgment, inside those guardrails
This is not a philosophical debate about AI: it's an architecture choice.
If you want to maximize investments in AI and reduce unnecessary human-in-the-loop effort, answers need to be correct, consistent, and explainable every time. This is especially true when customers, auditors, or regulators ask questions.



