Prompt Engineering Was Yesterday. Context Engineering Is Today. Here's What Comes Next.

March 18, 2026 · 8 min read

AI Maturity

Most organizations are stuck at Level 1 of a four-level AI maturity model. The gap between the 88% adopting AI and the 6% getting value is not the technology. It's which levels you've built.

The problem is well documented: 88% of organizations have adopted AI, but only 6% qualify as high performers generating meaningful financial impact.1 Boston Consulting Group found that 70% of the barriers to AI value are people- and process-related; only 20% are technology problems.

If the problem is not the technology, what is it?

A maturity model is emerging that explains both the gap and the path forward. It starts with the skill everyone has and builds through three disciplines almost nobody is investing in. The organizations that understand these four levels will close the gap. The rest will keep spending on AI tools and wondering why the results never arrive.

Level 1: Prompt Craft Is Table Stakes

Prompt craft is the skill most people think of when they hear "AI skills." Writing clear instructions in a chat window. Providing examples, setting guardrails, specifying output format.

This was the breakthrough skill of 2023 and 2024. Entire YouTube channels and LinkedIn courses were built around it. And it mattered, because a well-crafted prompt could turn a mediocre AI output into a useful one.

It is now table stakes, comparable to knowing how to send an email in 1998. Every organization has employees who can write a decent prompt. If that is where your organization's AI capability stops, you are part of the 88%.

Here is the illustration that makes this concrete. Two people sit down on the same Tuesday morning with the same AI model and the same subscription. Person A types a request for a presentation deck, gets something 80% right, spends 40 minutes cleaning it up, and saves a couple of hours. Person B writes a structured specification in 11 minutes, hands it off as an autonomous agent task, and completes five similar tasks before lunch. Same model. Same Tuesday. One person saves an afternoon. The other does a week's work in a morning.

The gap between them is not the prompt. It is everything above it.

Level 2: Context Engineering Is Where the Action Is

Context engineering is the discipline of curating and maintaining the optimal information state for an AI system. Not crafting a single instruction, but structuring the entire knowledge environment the AI operates within.

Anthropic published a foundational piece on this in September 2025, defining context engineering as "the set of strategies for curating and maintaining the optimal set of tokens (information) during LLM inference." Harrison Chase, CEO of LangChain, put it more bluntly on a Sequoia Capital podcast: "Everything's context engineering."

Consider the math. Your prompt might be 200 tokens. The context window it lands in might hold a million tokens. That means your carefully crafted prompt represents 0.02% of what the model sees. The other 99.98% is context engineering.
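The arithmetic behind that claim is straightforward; here is the back-of-the-envelope calculation (the 200-token and one-million-token figures are the article's illustrative numbers, not measurements):

```python
# Share of a 1M-token context window occupied by a 200-token prompt.
prompt_tokens = 200
context_window = 1_000_000

prompt_share = prompt_tokens / context_window * 100  # percent
context_share = 100 - prompt_share

print(f"prompt: {prompt_share:.2f}%  context: {context_share:.2f}%")
```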

This is the discipline that produces structured knowledge bases and agent specifications. It builds the documentation layer that tells AI systems what the organization knows. People who are 10x more effective with AI are not writing 10x better prompts. They are building 10x better context infrastructure.

Shopify CEO Tobi Lutke reflected on this during an Acquired podcast appearance: "I think so much of what people describe in companies as politics is actually bad context engineering." His observation was that being forced to provide AI with complete context made him a better communicator as a CEO. Context engineering is not just an AI skill. It is a leadership skill.

This is where the most sophisticated organizations currently operate. It is necessary. It is not sufficient.

Level 3: Intent Engineering Is What's Missing

Context engineering tells agents what to know. Intent engineering tells agents what to want.

Intent engineering is the practice of encoding organizational purpose, goals, values, trade-off hierarchies, and decision boundaries into infrastructure that AI agents can act against. Not as prose in a system prompt, but as structured, actionable parameters that shape how agents make autonomous decisions.

The distinction matters because context without intent produces AI that is technically brilliant and strategically misaligned.

Klarna learned this the hard way. In February 2024, the company deployed an AI customer service assistant that handled 2.3 million conversations in its first month, operating across 23 markets in 35 languages. Resolution times dropped from 11 minutes to under 2 minutes. The AI was doing the equivalent work of 700 full-time agents. Klarna's SEC filing confirmed $39 million in cost savings for 2024.

Then something shifted. By mid-2025, CEO Sebastian Siemiatkowski acknowledged quality concerns in a Bloomberg interview, stating that cost had been "a too predominant evaluation factor" in organizing customer service, resulting in "lower quality." The company began rehiring human agents.

The full picture is more nuanced than a simple AI failure story. Klarna maintained its AI-first strategy and continued scaling the program. But the core lesson stands: the AI agent resolved tickets faster than any human could. Speed was the metric it was given. The organization's actual intent was building lasting customer relationships and lifetime value. A human agent with five years of experience knows when to bend a policy, when to spend an extra three minutes because the customer's tone signals churn. That judgment was never encoded.

This is the pattern across enterprise AI. The AI worked as designed. It was designed for the wrong objective.

Consider the broader data. Deloitte's 2026 State of AI in the Enterprise survey (3,235 leaders across 24 countries) found that 84% of companies have not redesigned jobs around AI capabilities. Only 21% have a mature model for agent governance. Those numbers describe organizations that have invested in AI capability but never encoded organizational intent.

The historical parallel is instructive. OKRs were the management innovation that let Intel align thousands of employees to shared objectives in the 1970s. Intent engineering is the management innovation that lets organizations align thousands of AI agents to those same objectives in 2026. The difference: we do not have 20 years to wait.

Level 4: Specification Engineering Is Where This Is Heading

Specification engineering is the discipline of writing documents across an organization that autonomous agents can execute against over extended time horizons without human intervention. It means looking at your entire organizational document corpus (SOPs, policies, decision frameworks, runbooks) and asking: could an AI agent operate against this?

This matters because agent capabilities are scaling rapidly. Autonomous AI sessions that ran for minutes in 2024 now run for hours or days. The trajectory points toward agents that operate independently for weeks. Everything an organization relied on from human operators (catching mistakes in real time, providing missing context, course-correcting drift) must now be encoded before the agent starts.

The discipline rests on five primitives. State problems so they are self-contained. Define acceptance criteria so agents know when "done" means done. Build constraint architectures (what the agent must do, must not do, prefers, and when to escalate). Decompose work into independently executable components. Design evaluation criteria that let you measure output quality systematically.
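To make the five primitives concrete, here is one way they could be captured as a structured, machine-readable object. This is a minimal sketch under assumed field names (`AgentSpec` and its attributes are hypothetical, not an established schema):

```python
from dataclasses import dataclass, field

@dataclass
class AgentSpec:
    """A task specification organized around the five primitives."""
    problem: str                        # 1. self-contained problem statement
    acceptance_criteria: list[str]      # 2. when "done" means done
    must: list[str] = field(default_factory=list)           # 3. constraint architecture:
    must_not: list[str] = field(default_factory=list)       #    hard requirements,
    prefer: list[str] = field(default_factory=list)         #    prohibitions, preferences,
    escalate_when: list[str] = field(default_factory=list)  #    and escalation triggers
    subtasks: list[str] = field(default_factory=list)       # 4. independently executable parts
    eval_criteria: list[str] = field(default_factory=list)  # 5. how output quality is scored

spec = AgentSpec(
    problem="Draft a refund-policy FAQ for the support knowledge base.",
    acceptance_criteria=["Covers every refund scenario in policy v3",
                         "Each answer cites its policy section"],
    must=["Use only the published refund policy as source material"],
    must_not=["Promise refunds outside the published policy"],
    escalate_when=["A customer scenario is not covered by policy v3"],
    eval_criteria=["Spot-check 10 answers against the policy text"],
)
```

The value of the structure is less the code than the forcing function: every field left empty is a decision a long-running agent will otherwise make on its own.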

The planner-worker architecture has become the dominant pattern in production agent deployments: a capable model plans and decomposes the work, then cheaper, faster models execute it. The specification phase determines the quality ceiling. Execution without specification produces broken work that requires extensive human rework.
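The planner-worker split described above can be sketched in a few lines. The model calls are stubbed out: `plan_with_model` and `execute_step` are hypothetical stand-ins for a capable planning model and a cheaper execution model, not a real API.

```python
def plan_with_model(goal: str) -> list[str]:
    # Stand-in for a capable model that decomposes the goal into steps.
    # The quality of this decomposition sets the ceiling for everything after it.
    return [f"research: {goal}", f"draft: {goal}", f"review: {goal}"]

def execute_step(step: str) -> str:
    # Stand-in for a cheaper, faster model executing one independent step.
    return f"done({step})"

def run(goal: str) -> list[str]:
    steps = plan_with_model(goal)            # specification phase
    return [execute_step(s) for s in steps]  # execution phase

results = run("summarize Q3 support tickets")
```

Because each step is self-contained, the execution phase can be parallelized or retried per step; the design choice that matters is that no worker ever has to re-derive the plan.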

Lutke's Shopify experience validates this at the CEO level. He maintains a folder of prompts with expected results and runs them against every new model release. That practice is specification engineering in its simplest form. And he reports that the discipline of providing complete context improved his leadership: tighter emails, better memos, stronger decision frameworks.

Why Most Organizations Can't Get Past Level 1

Three barriers keep organizations stuck at the bottom of the maturity model.

The discipline is new. Intent and specification engineering did not exist as named practices until recently. Before AI agents could operate autonomously for hours or days, organizations never needed this infrastructure. Humans were the intent layer.

The two-cultures problem. The people who understand organizational strategy (executives) are not the people building AI agents (engineers). And the engineers building agents rarely understand organizational strategy. When AI investment is treated as a technology challenge for the CIO rather than a business issue requiring leadership across the organization, the result is an intent gap. The CIO can build infrastructure, but intent comes from the entire leadership team.

Tacit knowledge resists documentation. Most organizations have never had to make their operating intent explicit and structured. Goals live in slide decks and OKR documents half-read and cited once yearly. The real intelligence lives in the tacit knowledge of experienced employees who know what to do in ambiguous situations even though they have never been told. Making that knowledge machine-readable is a different kind of work than most organizations have ever done.

The Race Changed. Most Companies Didn't Notice.

Between 2023 and 2025, the question was: who has the best AI model? The biggest context window? The highest benchmark scores? That race made sense when models were the bottleneck.

Models are no longer the bottleneck. The frontier models available today are all extraordinarily capable. The differences between them matter far less than the differences between organizations that give them clear, structured, goal-aligned intent and organizations that don't.

The company with a solid model and extraordinary organizational intent infrastructure will outperform the company with a frontier model and fragmented, inaccessible organizational knowledge. That is the competitive reality of 2026.

The 88/6 gap is organizational, not technological. The maturity model puts structure around that insight. Level 1 is universal. Levels 2 through 4 are where value lives. And almost nobody has built them.

Open Questions

1. Who owns intent engineering in the org chart? The CEO sets organizational purpose. The CIO builds technology infrastructure. Intent engineering sits in the gap between them. No established role or reporting line exists for this work. Until someone owns it, it will not get done.

2. Can established companies catch up? A one-person business can make its knowledge base agent-readable in a weekend. An enterprise organization with decades of fragmented documentation faces a fundamentally different challenge. Whether that gap is a six-month project or a multi-year infrastructure build is an open question, and the answer probably varies by industry and organizational complexity.

3. How do you measure progress across the four levels? Organizations can measure AI adoption (Level 1 metrics like user counts and query volume). There is no established framework for measuring context quality, intent alignment, or specification readiness. Without measurement, investment decisions are guesswork.

Building Beyond Level 1: Four Steps for the Next 90 Days

1. Assess where your organization sits on the four levels. Most companies will find individual prompt users (Level 1), fragmented context infrastructure (early Level 2), and nothing at Levels 3 or 4. Knowing the gap is the prerequisite for closing it.

2. Start with context infrastructure. You cannot skip levels. Build a unified context layer before attempting intent or specification work. This means centralized knowledge bases, consistent data access, and standardized AI tool environments. Deloitte found that 84% of companies have not even redesigned jobs around AI.9 Start there.

3. Run the Klarna test on every deployed AI system. For each AI tool or agent operating in your organization, ask: "If this system optimized perfectly for its stated objective, would that serve our actual organizational goals?" If the answer is unclear, you have an intent engineering problem.

4. Make one critical document agent-readable. Pick a single high-value operational document (an SOP, a decision framework, a policy) and rewrite it as a specification an AI agent could execute. Include acceptance criteria, constraints, and decision boundaries. This exercise will reveal how much of your organization's operating intelligence lives in people's heads rather than your documentation.

Nova Group helps mid-market leadership teams build the infrastructure above Level 1. As a consulting firm providing Fractional CIO, Advisory, and AI Strategy services, Nova Group works with organizations to build the context, intent, and specification layers their AI investments need to deliver value. The four-level maturity model maps directly to what a Fractional CIO actually does: translate executive intent into technology execution.


Sources

  1. McKinsey & Company, "The State of AI: How Organizations Are Rewiring to Capture Value" (2025). mckinsey.com

  2. Boston Consulting Group, "Where's the Value in AI?" (2024). bcg.com

  3. Anthropic, "Effective Context Engineering for AI Agents" (September 2025). anthropic.com

  4. Harrison Chase on Sequoia Capital Training Data podcast, "Context Engineering Our Way to Long-Horizon Agents" (January 2026). sequoiacap.com

  5. Tobi Lutke on Acquired podcast, "How to Live in Everyone Else's Future with Shopify CEO Tobi Lutke." acquired.fm

  6. Klarna, "Klarna AI Assistant Handles Two-Thirds of Customer Service Chats in Its First Month" (February 27, 2024). prnewswire.com

  7. Klarna F-1 SEC Filing (March 2025). Referenced in CX Network.

  8. Bloomberg, "Klarna Turns From AI to Real Person Customer Service" (May 8, 2025). bloomberg.com

  9. Deloitte, "The State of AI in the Enterprise: From Ambition to Activation" (2026). deloitte.com

Contact us

Reach out today.

We'll schedule a discovery call to understand your challenges and how we can help.












Nova Group is a technology advisory firm providing Fractional CIO, Interim CIO, and AI strategy consulting services. The firm helps SMB and mid-market organizations align technology strategy with business outcomes, modernize IT leadership, and implement enterprise AI initiatives.

© 2026 Nova Group. All Rights Reserved.

We care about protecting your data. Read more in our Privacy Policy.
