May 5, 2026
5 min read

McKinsey Is Right: AI Needs Context. Almost No One Has It.

[Image: Dark background with the TigerGraph logo, scattered blue nodes converging into an orange circle, and the text "Context Defines the Decision." Source: McKinsey & Company, Rewired (2026).]

There is a moment in every technology cycle where the language settles before the understanding does. We are in that moment now with AI.

Everyone agrees on the direction. Even McKinsey & Company, in their article "The AI revolution in software development," has made it explicit: the quality of an AI system is fundamentally tied to the quality of the context it operates on. That idea has moved quickly from insight to assumption, shaping strategy at the highest levels. But agreement on direction often hides a deeper problem.

We are using the same word “context” to describe fundamentally different things. And that gap between language and reality is now showing up in production systems. McKinsey pointed to the problem. They did not define the solution.

Context Became a Placeholder

In practice, “context” has become a placeholder. It absorbs meaning without requiring precision. When teams say they are improving context, what they are usually doing is expanding the surface area of information available to a model:

  • more documents indexed 
  • more embeddings generated 
  • broader retrieval windows 
  • richer prompt construction 

Each of these moves feels directionally correct. Each is measurable. Each can be presented as progress. But none of them answer the underlying question: What makes context useful to a system that is expected to produce decisions, not just outputs? Because usefulness is not a function of volume. Context is not volume. It is structure.

The Shift from Information to Meaning

There is a distinction most architectures blur. Information answers: What is there? Context answers: What does it mean in relation to everything else? A system can retrieve highly relevant information and still fail to produce a correct or consistent outcome, not because the data is wrong, but because the system has no reliable way to understand how pieces of information relate across multiple steps.

Retrieval systems are optimized to find proximity, semantic closeness, statistical similarity, and lexical overlap. They are very good at assembling fragments that look like they belong together. Retrieval finds proximity. It does not create understanding. Understanding emerges from relationships. And relationships require structure.
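The contrast can be sketched in plain Python. This is a toy illustration with hypothetical data, using lexical overlap as a stand-in for embedding similarity; it is not any particular product's implementation:

```python
import re

# Hypothetical document store: three fragments that all "look" relevant.
documents = {
    "d1": "Acme Corp reported a supplier delay in Q3.",
    "d2": "Acme Corp is a supplier to Beta Inc.",
    "d3": "Beta Inc missed its Q3 delivery targets.",
}

def tokens(text):
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, docs):
    """Toy retrieval: rank documents by word overlap with the query.
    This finds proximity, i.e. fragments that look like they belong together."""
    q = tokens(query)
    scored = [(len(q & tokens(text)), doc_id) for doc_id, text in docs.items()]
    return [doc_id for score, doc_id in sorted(scored, reverse=True) if score > 0]

print(retrieve("Why did Beta Inc miss its Q3 delivery targets?", documents))
# All three fragments are retrieved, but nothing in the result says HOW
# they relate. The causal chain only exists as explicit relationships:
edges = {
    ("Acme Corp", "SUPPLIES", "Beta Inc"),
    ("Acme Corp", "HAD_DELAY", "Q3"),
}
# Traversing Acme Corp -> SUPPLIES -> Beta Inc is what connects the
# supplier delay to the missed target. Similarity alone never does.
```

The retrieval step returns a ranked pile of fragments; the edges are what turn those fragments into an explanation.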

Where It Breaks

At small scale, this limitation is easy to miss. A model retrieves a handful of documents, produces a plausible answer, and the system appears to work. The underlying assumptions remain invisible. But as systems are pushed into real environments (multiple entities, multiple data sources, evolving relationships, real-time decisions), the cracks become structural.

You begin to see patterns:

  • answers that shift depending on what was retrieved 
  • reasoning that cannot be reproduced or audited 
  • outputs that are locally coherent but globally inconsistent 
  • cost increasing faster than accuracy 

These are not edge cases. They are signals. Most AI systems do not operate on context. They operate on fragments of information, loosely assembled and treated as if they were understanding.

What McKinsey Pointed Toward

McKinsey’s observation is directionally correct: context determines outcome quality. But improving context is not a retrieval problem. It is a representation and reasoning problem. It requires a system to do three things reliably:

  1. Represent entities and their relationships explicitly 
  2. Traverse those relationships across multiple steps 
  3. Preserve the structure of those relationships as part of the reasoning process 

Without those capabilities, context remains fragmented, no matter how much information is added.
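The three capabilities above can be sketched in a few lines of Python. The graph, entity names, and relationship types here are hypothetical, and the traversal is a plain breadth-first search, not any specific engine's query language:

```python
from collections import deque

# 1. Represent entities and their relationships explicitly,
#    as typed edges rather than co-occurring text fragments.
graph = {
    "Supplier-A": [("SHIPS_TO", "Plant-1")],
    "Plant-1":    [("PRODUCES", "Part-X")],
    "Part-X":     [("USED_IN", "Product-Z")],
}

def explain(start, target):
    """2. Traverse relationships across multiple steps, and
    3. preserve the path taken, so the conclusion can be
    audited rather than just generated."""
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == target:
            return path  # the path IS the reasoning trace
        for relation, neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, path + [relation, neighbor]))
    return None  # no relationship chain connects the two entities

print(explain("Supplier-A", "Product-Z"))
```

Because every hop is an explicit relationship, the same inputs always yield the same explanation; that is what makes the reasoning reproducible in a way that similarity-ranked fragments are not.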

The Difference Between Knowing and Understanding

This is where most approaches collapse into a subtle failure. They assume that assembling the right pieces of information is equivalent to understanding. It isn’t. Two facts, on their own, are just facts. Connected through the right relationship, they become an explanation. Connected across multiple steps, they become a decision. Similarity can suggest relevance. But only relationships establish meaning. That transition from fact to explanation to decision is not driven by volume. It is driven by structure.

The Cost of Getting This Wrong

When context is treated as a retrieval problem, the natural response to uncertainty is expansion. More data. More documents. More tokens. More compute. Each addition feels like a step toward completeness. In reality, it increases the number of possible interpretations the system must evaluate. The system is doing more work to resolve ambiguity that should not exist. That is why cost does not scale linearly. It compounds. This is not just a scaling issue. It is a structural inefficiency. And it is already showing up in production systems.

Context as a System, Not an Input

To move beyond this, context must be treated differently. Not as something appended to a prompt or retrieved at query time, but as a first-class system layer that defines how information is organized and traversed. That layer must: make relationships explicit, enable those relationships to be navigated dynamically, and preserve the path taken so outcomes can be understood, not just generated. At that point, context stops being an approximation. It becomes the foundation for deterministic reasoning in systems that are otherwise probabilistic.

What Changes When Context Is Real

When context is structured and connected:

  • the system evaluates fewer possibilities, not more 
  • reasoning paths become traceable 
  • outputs become consistent across similar conditions 
  • cost aligns with relevance instead of volume 

The system does less work and produces better outcomes. This is not an optimization. It is a fundamentally different operating model.

The Real Takeaway

McKinsey is right. Context determines the outcome. But the real insight is not that context matters. It is that most systems today do not have it. They have proximity. They have fragments. They have approximations. They do not have structured understanding. Context is not what you retrieve. It is the structure that determines how information connects and how those connections are used to reach a decision. Until that distinction is addressed, systems will continue to scale in cost while failing in consistency. And that is not a future risk. It is already happening.


About the Author


Dr. Jay Yu | VP of Product and Innovation

Dr. Jay Yu is the VP of Product and Innovation at TigerGraph, responsible for driving product strategy and roadmap, as well as fostering innovation in the graph database engine and graph solutions. He is a proven hands-on full-stack innovator, strategic thinker, leader, and evangelist for new technology and products, with 25+ years of industry experience ranging from a highly scalable distributed database engine company (Teradata), to a B2B e-commerce services startup, to a consumer-facing financial applications company (Intuit). He received his PhD from the University of Wisconsin - Madison, where he specialized in large-scale parallel database systems.


Todd Blaschka | COO

Todd Blaschka is a veteran of the enterprise software industry. He is passionate about creating entirely new segments in data, analytics, and AI, with the distinction of establishing graph analytics as a Gartner Top 10 Data & Analytics trend two years in a row. By fervently focusing on critical industry and customer challenges, the companies under Todd's leadership have delivered significant quantifiable results to the largest brands in the world through a channel and solution sales approach. Prior to TigerGraph, Todd led go-to-market and customer experience functions at Clustrix (acquired by MariaDB), Dataguise, and IBM.