May 11, 2026
7 min read

McKinsey Highlighted the Risk. Most AI Decisions Still Can’t Be Proven

[Image: “If You Can’t Trace It, It’s Not a Decision.” Source: McKinsey & Company, Rewired (2026).]


There is a difference between producing an answer and being able to stand behind a decision, and that difference is becoming harder to ignore. In its article “The AI revolution in software development,” McKinsey & Company pointed toward a future where AI systems don’t just assist but operate: coordinating work, making decisions, and influencing what happens next. That shift is already underway, but it introduces a requirement most systems are not designed to meet. The moment a system moves from answering questions to making decisions, the standard changes. The question is no longer whether the output is plausible. It is whether the decision can be explained, traced, and defended. And for most systems today, that is where things break.

The Gap Between an Answer and a Decision

Most AI systems are still evaluated at the level of output. Was the answer correct? Did the model retrieve relevant information? Did the response look coherent? Those are reasonable questions. They are also the wrong ones. An answer can be judged in isolation. A decision cannot. A decision has to hold up beyond the moment it is produced. It has to be understood by someone else, reproduced under similar conditions, and justified in environments where the stakes are real: financial, operational, or regulatory. That is not a property of the model. It is a property of the system. And it depends on something most systems do not preserve.

Why Explainability Falls Short

The industry has spent years trying to make AI systems more explainable, but most of that effort has focused on generating better descriptions after the fact. A model produces an output, and the system surfaces supporting documents, highlights relevant passages, or generates a narrative that explains why the answer makes sense. That can be useful, but it is not the same as being able to show how the decision was made. There is a fundamental difference between a justification and a trace. A justification tells you what the system believes matters. A trace shows you what actually happened. That difference becomes critical the moment decisions start to propagate across systems.

Where It Breaks

At small scale, weak explainability is tolerable. If a system produces an answer that feels slightly off, a person can review it, correct it, and move on. The cost is contained. But as systems begin to operate (triggering actions, updating records, escalating cases, influencing downstream decisions), the cost of not being able to trace those decisions rises rapidly. The system is no longer producing isolated outputs. It is shaping outcomes. And once outcomes depend on prior decisions, the question shifts from “is this answer correct?” to “how did we get here?” If that question cannot be answered, the system cannot be trusted at scale. Once systems act, this is no longer a limitation. It is a failure condition.

The Problem With Plausible Reasoning

One of the more uncomfortable realities is that AI systems can produce explanations that sound coherent without exposing the underlying reasoning that led to the decision. They can cite documents, summarize signals, and construct narratives that appear internally consistent. But those explanations are often generated in the same way the original answer was: assembled from fragments and optimized for plausibility rather than traceability.

That creates a subtle but important risk. The system appears explainable, but it is not accountable, because it cannot reliably reproduce the path that led to the decision. The model can explain in language. The enterprise requires explanation in evidence.

Decision Quality Depends on What You Can Trace

We tend to think about AI performance in terms of accuracy, speed, and cost. Those are important, but they are incomplete. In enterprise systems, decision quality is determined by something deeper: whether the system can connect the outcome back to the relationships that made it valid. If a system flags fraud, approves credit, or escalates risk, the decision is only as strong as the path behind it. That path is what allows someone else (an analyst, an auditor, a regulator) to understand not just what happened, but why it happened. Without that, the organization is left trusting a result it cannot verify. That is not a model limitation. It is a system limitation.

Why Retrieval Isn’t Enough

Most modern architectures rely on retrieval to provide context. They surface relevant information, pass it to a model, and generate an output. That works well for assembling answers. It does not work for proving decisions. Retrieval can tell you what information was used. It cannot show how that information was connected. And real decisions are not based on isolated pieces of information. They are based on relationships across entities, events, behaviors, and time. Without a system that preserves those relationships and the paths between them, the decision cannot be reconstructed with confidence. It can only be approximated.
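The distinction above can be made concrete. Below is a minimal sketch (all entity names and the transfer scenario are invented for illustration): retrieval hands back isolated facts, while a graph that preserves relationships lets you traverse the actual connection between them.

```python
# Retrieval returns isolated facts: you know *what* was used,
# but not how those facts connect to each other.
retrieved_docs = [
    "Account A opened 2 days ago",
    "Account C flagged for fraud last month",
]

# A graph preserves the relationships, so the connection itself
# can be traversed and shown. (Hypothetical data.)
edges = {
    "A": [("transferred_to", "B")],
    "B": [("transferred_to", "C")],
    "C": [],
}

def find_path(graph, start, goal, path=None):
    """Depth-first search returning the chain of hops from start to goal."""
    path = (path or []) + [start]
    if start == goal:
        return path
    for _, neighbor in graph.get(start, []):
        if neighbor not in path:
            found = find_path(graph, neighbor, goal, path)
            if found:
                return found
    return None

# The connecting path that a flat list of retrieved documents cannot express:
print(find_path(edges, "A", "C"))  # ['A', 'B', 'C']
```

The retrieval result and the graph contain overlapping facts, but only the graph can answer “how are A and C connected?”, which is the question an auditor actually asks.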

What a Decision Actually Requires

To make a decision traceable, the system has to preserve the structure of context before the decision is made. It needs to represent entities explicitly, maintain relationships between them, and allow those relationships to be traversed as the system moves from one state of understanding to another. Most importantly, it needs to retain the path that shaped the outcome. That path is not a byproduct. It is the decision.
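One way to make “retain the path” tangible is to record each relationship traversal as the decision is being made, rather than generating a narrative afterward. The sketch below is a simplified illustration under that assumption; the entity names and fraud scenario are hypothetical, not a real TigerGraph API.

```python
from dataclasses import dataclass, field

@dataclass
class DecisionTrace:
    """Hypothetical record that preserves the path behind a decision."""
    decision: str
    steps: list = field(default_factory=list)  # each step: (source, relationship, target)

    def traverse(self, source, relationship, target):
        """Record one hop of reasoning at the moment it happens."""
        self.steps.append((source, relationship, target))
        return target

    def explain(self):
        """Reconstruct the exact path that shaped the outcome, in order."""
        return [f"{s} --{r}--> {t}" for s, r, t in self.steps]

# Illustrative decision: flagging a transaction via its relationships.
trace = DecisionTrace(decision="flag_transaction")
node = trace.traverse("txn_9182", "paid_to", "merchant_417")
node = trace.traverse(node, "shares_device_with", "account_552")
node = trace.traverse(node, "previously_flagged_for", "fraud_case_23")

for line in trace.explain():
    print(line)
```

Because the steps are captured during traversal, `explain()` is evidence of what happened, not a plausible story assembled after the fact, which is exactly the justification-versus-trace distinction drawn earlier.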

The Shift That’s Coming

For years, AI systems have been evaluated on how well they generate outputs. That standard is no longer sufficient. As systems begin to operate, the requirement shifts from generation to accountability. Organizations will not just ask whether the system works. They will ask whether the system can prove what it did. Can we trace the decision? Can we reproduce the reasoning? Can we show the relationships that led to the outcome? Can we defend it under scrutiny? If the answer is no, the system may still be useful. But it will not be trusted with decisions that matter.

The Real Risk

McKinsey highlighted the operating shift. But scale without traceability introduces a different kind of risk. Not just incorrect outputs. Not just rising cost. Not just system complexity. Decisions that cannot be explained. Because once AI systems begin to act, the organization becomes responsible not just for what the system does, but for why it did it. And if that “why” cannot be inspected, the system cannot be governed. A decision you cannot trace is a decision you cannot trust, and systems that cannot be trusted will not be used.

The Real Takeaway

This is about trust. Because in the end, the question is not whether AI can produce answers or coordinate actions. It is whether those actions can be understood, justified, and defended. And that comes down to something very simple. If you cannot trace the decision, you do not have a decision. You have an output. And that is where the line is being drawn. Systems that can act and prove why will move forward.

Everything else will stop at the edge of production.

Previously in this series:

Blog 1 – McKinsey Is Right: AI Needs Context. Almost No One Has It.

Blog 2 – McKinsey Described the Agent Factory. Most Systems Are Still Just Sequences.

 

About the Author


Dr. Jay Yu | VP of Product and Innovation

Dr. Jay Yu is the VP of Product and Innovation at TigerGraph, responsible for driving product strategy and roadmap, as well as fostering innovation in the graph database engine and graph solutions. He is a proven hands-on full-stack innovator, strategic thinker, leader, and evangelist for new technology and products, with 25+ years of industry experience ranging from a highly scalable distributed database engine company (Teradata) and a B2B e-commerce services startup to a consumer-facing financial applications company (Intuit). He received his PhD from the University of Wisconsin–Madison, where he specialized in large-scale parallel database systems.


Todd Blaschka | COO

Todd Blaschka is a veteran of the enterprise software industry. He is passionate about creating entirely new segments in data, analytics, and AI, with the distinction of establishing graph analytics as a Gartner Top 10 Data & Analytics trend two years in a row. By focusing relentlessly on critical industry and customer challenges, the companies under Todd's leadership have delivered significant, quantifiable results to the largest brands in the world through a channel and solution sales approach. Prior to TigerGraph, Todd led go-to-market and customer experience functions at Clustrix (acquired by MariaDB), Dataguise, and IBM.