TL;DR

The Bottom Line: The energy industry’s current approach to AI — deploying a single general-purpose agent and hoping it handles everything from drilling optimization to regulatory compliance — doesn’t scale and can’t be trusted with safety-critical decisions. Hierarchical multi-agent systems, where a central orchestrator coordinates domain-specific agents operating within strict safety boundaries, offer a fundamentally different architecture. One that mirrors how high-performing engineering teams actually work.

Key Insight: The question isn’t whether AI can make operational decisions in energy — it’s whether we can design systems where the AI knows exactly when it shouldn’t.


David Moore is an AI & Digital Transformation leader with 20+ years of global experience in energy operations. He holds a Ph.D. in Mechanical Engineering and speaks internationally on AI architecture and deployment strategy.

Every energy company thinks they have an AI strategy. Most of them have a chatbot connected to some well data.

There’s a pattern I keep seeing across the energy sector. A company invests in an AI initiative, deploys a capable large language model, connects it to some operational data, and declares they’ve taken a step toward autonomous operations. For the first few weeks, it’s impressive. It can summarize reports, answer questions about well data, maybe even generate a passable daily drilling report.

Then someone asks it to diagnose a stuck pipe event. Or to recommend completion parameters for a formation it has limited offset data on. Or to make a call on barrier integrity.

And the whole thing falls apart. Not because the AI isn’t smart enough — but because a single agent trying to be everything to everyone is architecturally incapable of operating safely in a domain where the consequences of being wrong can be measured in real terms. A single stuck pipe event can cost $1-5 million. A well control incident can run into the tens of millions before you account for environmental remediation, regulatory fallout, and reputational damage. These aren’t acceptable margins for hallucination.

The Single-Agent AI Problem in Energy Operations

Most enterprise AI deployments today follow a straightforward pattern: one model, one system prompt, one set of tools, one context window. This works remarkably well for knowledge work — drafting documents, analyzing data, answering questions. But upstream energy operations aren’t knowledge work in the traditional sense. They’re a continuous, high-stakes decision environment where:

  - Decisions span multiple domains at once. Drilling, completions, and well integrity interact on every well, and no single context window holds deep expertise in all of them.
  - Consequences are safety-critical. Different decision types demand different levels of human involvement, and a single agent has no structural way to stratify its own authority.
  - Institutional memory matters. The right call depends on what happened on offset wells and what the organization has learned, not just on model training data.


Hierarchical Multi-Agent Systems: A Better Architecture

The alternative isn’t to build a bigger, smarter single agent. It’s to design a system that mirrors how effective engineering teams actually operate.

A hierarchical multi-agent system is an AI architecture in which a central orchestrator coordinates multiple domain-specific AI agents, each operating within defined safety boundaries and authority levels. Rather than relying on a single general-purpose model, it distributes expertise across specialists and concentrates coordination, synthesis, and escalation in one place.

Think about how a drilling campaign works in practice. You don’t have one person who plans the well, monitors real-time operations, diagnoses trouble events, writes reports, and manages regulatory compliance. You have specialists — drilling engineers, completions engineers, well integrity engineers — coordinated by a team lead who routes problems to the right person, synthesizes their input, and escalates decisions that exceed any individual’s authority.

A hierarchical multi-agent system follows the same principle:

The Orchestrator sits at the top. It doesn’t try to answer questions directly. Instead, it understands what kind of problem it’s looking at and routes it to the right domain agent. If a problem spans multiple domains, it coordinates the response — this is AI orchestration in its truest sense. If a situation exceeds what any agent should handle autonomously, it escalates to a human. Think of the Orchestrator as the team lead — its job is coordination, not execution.

Domain Agents are specialists. A drilling agent understands wellbore mechanics, ROP optimization, and stuck pipe diagnosis. A completions agent knows perforation design, artificial lift selection, and frac design review. A well integrity agent tracks barrier status, casing wear, and abandonment planning. Each operates within a clearly defined scope, with explicit boundaries on what falls outside its authority.

Procedures are the structured workflows each agent executes, defined sequences of steps with specific inputs, outputs, and validation criteria. An offset well analysis, for example, follows a systematic process: spatial query for analog wells, parallel data retrieval, metric extraction, pattern analysis, and report generation. The procedure defines what data sources to consult, what calculations to run, and what the output schema looks like.
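To make the division of labor concrete, here is a minimal sketch of how an orchestrator and its domain agents might fit together. The agent names, scopes, and routing logic are illustrative assumptions, not a prescribed implementation:

```python
from dataclasses import dataclass, field

@dataclass
class DomainAgent:
    name: str
    scope: set = field(default_factory=set)       # problem types this agent may own
    forbidden: set = field(default_factory=set)   # explicitly outside its authority

    def handle(self, problem_type: str, payload: dict) -> dict:
        if problem_type in self.forbidden:
            # Charter boundary: the agent refuses rather than improvises
            raise PermissionError(f"{self.name} has no authority over {problem_type}")
        # ... domain-specific procedure would run here ...
        return {"agent": self.name, "problem": problem_type, "recommendation": "..."}

class Orchestrator:
    """Routes problems to domain agents; escalates anything no agent may own."""
    def __init__(self, agents):
        self.agents = agents

    def route(self, problem_type: str, payload: dict) -> dict:
        owners = [a for a in self.agents if problem_type in a.scope]
        if not owners:
            # Nothing in any charter covers this: a human gets it, by design
            return {"escalate_to_human": True, "problem": problem_type}
        if len(owners) == 1:
            return owners[0].handle(problem_type, payload)
        # Cross-domain problem: gather each specialist's view, then synthesize
        views = [a.handle(problem_type, payload) for a in owners]
        return {"synthesized_from": [v["agent"] for v in views], "views": views}

drilling = DomainAgent("drilling", scope={"stuck_pipe_diagnosis", "rop_optimization"},
                       forbidden={"well_control_execution"})
completions = DomainAgent("completions", scope={"perforation_design", "frac_design_review"})
orchestrator = Orchestrator([drilling, completions])
```

The point of the sketch is the shape: routing and escalation live in one place, and an agent’s boundaries are code, not prompt text.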

This isn’t just organizational tidiness. It’s a fundamentally different trust model. When a single agent gives you a recommendation, you’re trusting one black box. When a hierarchical system gives you a recommendation, you can trace exactly which agent produced it, which procedure it followed, what data it consulted, and where in the execution chain human judgment is required.


The Execution Hierarchy: Not Everything Needs an LLM

Here’s where most agentic AI architectures get it wrong. They treat the language model as the default execution engine for everything. Need to calculate Mechanical Specific Energy? Send it to the LLM. Need to fetch well header data from an API? Send it to the LLM. Need to convert units? LLM.

This is expensive, slow, and introduces unnecessary uncertainty into operations that are entirely deterministic. An MSE calculation has a known formula. An API call has a defined endpoint. A unit conversion is pure arithmetic. None of these benefit from probabilistic reasoning — they’re degraded by it.
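For contrast, here is what the deterministic path looks like for MSE, a minimal sketch using one common form of the formula (Teale, 1965); the function name, units, and example inputs are my own illustrative choices:

```python
import math

def mechanical_specific_energy(wob_lbf, rpm, torque_ftlbf, rop_ft_hr, bit_diameter_in):
    """One common form of MSE (Teale, 1965), in psi:
    MSE = WOB/A + (120 * pi * RPM * Torque) / (A * ROP)
    """
    if rop_ft_hr <= 0:
        raise ValueError("ROP must be positive")
    area_in2 = math.pi * (bit_diameter_in / 2) ** 2             # bit face area, in^2
    axial = wob_lbf / area_in2                                   # psi
    rotary = (120 * math.pi * rpm * torque_ftlbf) / (area_in2 * rop_ft_hr)
    return axial + rotary

# Hypothetical inputs: 30,000 lbf WOB, 120 rpm, 5,000 ft-lbf torque, 90 ft/hr, 8.5 in bit
mse_psi = mechanical_specific_energy(30_000, 120, 5_000, 90, 8.5)
print(f"MSE: {mse_psi / 1000:.1f} ksi")   # exact arithmetic, no model in the loop
```

Same answer every time, in microseconds, with zero tokens spent.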

The execution hierarchy is a design principle that matches each operational task to the simplest, most reliable method capable of completing it — from deterministic code at the base to human decision at the top — ensuring that AI reasoning is only invoked when simpler methods are insufficient. A well-designed multi-agent system uses a six-level execution hierarchy, from fully automated to human-only, that matches the tool to the task:

  1. Deterministic Code — API calls, calculations, data retrieval, unit conversions. These run first because they’re fast, reliable, and produce exact results. The vast majority of operational data tasks fall here.

  2. Scripted Automation — Batch operations, file processing, data transformations. Structured but more complex than a single function call.

  3. LLM Reasoning — Analysis, interpretation, pattern recognition, natural language generation. This is where the language model shines: making sense of data that’s already been retrieved and validated by the deterministic layers above.

  4. Multi-Step Procedures — Orchestrated workflows that combine deterministic and reasoning steps in a defined sequence. An offset well analysis, for example, uses deterministic code to fetch and calculate, then LLM reasoning to identify patterns and generate insights.

  5. Multi-Agent Deliberation — Complex decisions where multiple domain perspectives are needed. The drilling agent and completions agent might have different views on a casing design choice. Rather than one agent making the call, they each provide their analysis, and the orchestrator synthesizes or escalates.

  6. Human Decision — The final authority for anything safety-critical. Well control decisions, barrier acceptance, regulatory filings, chemical program changes. The system surfaces information; the human decides.

The key insight is that each level is tried before escalating to the next. If the answer can come from a calculation, it never touches the LLM. If a single agent can handle it, it never goes to multi-agent deliberation. If the agents can resolve it among themselves, it never reaches a human.
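A minimal sketch of that try-the-cheapest-level-first dispatch, with hypothetical handler hooks standing in for each level:

```python
from enum import IntEnum
from typing import Callable, Optional

class Level(IntEnum):
    DETERMINISTIC = 1   # code: calculations, API calls, conversions
    SCRIPTED = 2        # batch jobs, file processing
    LLM_REASONING = 3   # interpretation, pattern recognition
    PROCEDURE = 4       # orchestrated multi-step workflows
    DELIBERATION = 5    # multiple domain agents weigh in

Handler = Callable[[dict], Optional[dict]]  # returns None if the task exceeds this level

def execute(task: dict, handlers: dict) -> dict:
    """Try each level in order; escalate only when the current level declines."""
    for level in sorted(handlers):
        result = handlers[level](task)
        if result is not None:
            result["resolved_at"] = level.name   # provenance travels with the answer
            return result
    # No level could own the task: it goes to a person, with everything gathered so far
    return {"resolved_at": "HUMAN", "action": "queued for human decision", "task": task}
```

The `resolved_at` tag is the provenance hook the next paragraph relies on.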

This isn’t just about efficiency. It’s about appropriate confidence. When the system tells you the MSE is 42 ksi, you know that came from a formula, not a prediction. When it tells you the most likely stuck pipe mechanism is differential sticking with high confidence, you know that came from a diagnostic framework informed by real-time data. The provenance of every output is clear.


AI Safety in Energy: Architecture, Not Afterthought

In most AI systems, safety is handled through prompting. “You are a helpful assistant. Do not make dangerous recommendations.” This is, to put it charitably, inadequate for safety-critical operations.

“The system doesn’t need to be told not to make well control decisions. It structurally can’t.”

A safety escalation matrix maps each category of operational decision to a required level of human involvement, based on consequence severity and reversibility. In AI-augmented energy operations, this matrix is enforced architecturally — not through prompt instructions — so that agents structurally cannot exceed their authority.

In a hierarchical multi-agent system, safety is structural. Every agent operates within a defined charter that explicitly states what falls outside its authority. The drilling agent, for example, can analyze stuck pipe symptoms and recommend freeing actions — but it cannot authorize those actions. It can calculate optimal drilling parameters — but it cannot change them. It can generate a well control recommendation — but it absolutely cannot execute one.

These aren’t prompt instructions that might be overridden by a cleverly worded query. They’re architectural boundaries enforced by the system. The agent literally does not have the tools to take actions outside its scope.

This maps to what the industry already understands: an AI safety escalation matrix for energy operations.

Decision Type                      | System Behavior
Data retrieval and calculations    | Fully automated — no human intervention needed
Parameter optimization suggestions | Automated with notification — human is informed
Operational parameter changes      | Human approval required before execution
Chemical program modifications     | Manual only — system provides analysis, human acts
Well control responses             | Manual only — system provides diagnosis, human decides
Barrier status changes             | Manual only — system monitors, human authorizes
Regulatory submissions             | Manual only — system drafts, human reviews and submits

The gradient from full automation to manual-only isn’t arbitrary. It maps directly to consequence severity and reversibility. You can automate data retrieval because getting it wrong means re-running a query. You can’t automate well control because getting it wrong means a potential blowout.
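Encoded directly, the matrix above is a few lines of configuration plus one enforcement rule. The decision-type keys mirror the table; the default behavior is my assumption about how such a system should fail:

```python
from enum import Enum

class Authority(Enum):
    FULLY_AUTOMATED = "no human intervention needed"
    AUTOMATED_WITH_NOTIFICATION = "human is informed"
    HUMAN_APPROVAL = "human approval required before execution"
    MANUAL_ONLY = "system analyzes; human acts"

ESCALATION_MATRIX = {
    "data_retrieval": Authority.FULLY_AUTOMATED,
    "parameter_optimization_suggestion": Authority.AUTOMATED_WITH_NOTIFICATION,
    "operational_parameter_change": Authority.HUMAN_APPROVAL,
    "chemical_program_modification": Authority.MANUAL_ONLY,
    "well_control_response": Authority.MANUAL_ONLY,
    "barrier_status_change": Authority.MANUAL_ONLY,
    "regulatory_submission": Authority.MANUAL_ONLY,
}

def required_authority(decision_type: str) -> Authority:
    # Unknown decision types default to the most restrictive tier:
    # when in doubt, escalate. That default is the safety argument in one line.
    return ESCALATION_MATRIX.get(decision_type, Authority.MANUAL_ONLY)
```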

This is what I mean by safety as architecture. The human-in-the-loop isn’t a limitation of the system — it’s a feature of it.


Institutional Memory for AI Agents in Energy Operations

The final piece that most AI implementations miss entirely is institutional memory. Current approaches treat every conversation as starting from zero, or at best, stuff some documents into a retrieval system and hope the right context surfaces.

Operational teams don’t work this way. They maintain distinct types of knowledge:

What happened — Specific events, decisions, and outcomes. The stuck pipe incident on Well 47 in the Bakken, what caused it, what was tried, what worked, what didn’t. Post-well reviews, NPT analyses, near-miss reports. This is experiential knowledge — the kind that takes years to accumulate and walks out the door when experienced engineers retire.

What we know — Industry standards, regulations, company specifications, vendor data sheets. API standards for casing design. Regional regulatory requirements for abandonment. Formation-specific drilling parameters from offset wells. This is reference knowledge — stable, authoritative, and ideally version-controlled.

How we work — Approved standard operating procedures, decision trees, best practices, templates. The company’s approach to running casing. The preferred stuck pipe response protocol. The daily reporting template. This is procedural knowledge — the codified way the organization operates.

A well-designed memory system maintains these as distinct tiers because they serve different purposes and have different update patterns. Events are captured continuously during operations. Standards are updated when regulations or specifications change. Procedures evolve as the organization learns and improves.

When the system analyzes a new stuck pipe event, it doesn’t just apply generic knowledge. It retrieves relevant episodes from similar wells, checks applicable standards for the formation and hole section, and follows the organization’s approved diagnostic procedure. The recommendation carries the weight of organizational experience, not just model training data.
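A sketch of that three-tier lookup, using the tier names from above; the store structures and matching logic are simplified assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class MemoryTiers:
    episodic: list = field(default_factory=list)    # what happened: events and outcomes
    reference: dict = field(default_factory=dict)   # what we know: standards, specs, regs
    procedural: dict = field(default_factory=dict)  # how we work: SOPs, decision trees

    def context_for(self, event: dict) -> dict:
        """Assemble tiered context for a new operational event."""
        similar = [e for e in self.episodic
                   if e.get("formation") == event.get("formation")
                   and e.get("type") == event.get("type")]
        standards = self.reference.get(event.get("hole_section"), {})
        protocol = self.procedural.get(event.get("type"), [])
        return {"episodes": similar, "standards": standards, "procedure": protocol}

    def record_outcome(self, event: dict, outcome: dict) -> None:
        """Today's stuck pipe diagnosis becomes tomorrow's offset experience."""
        self.episodic.append({**event, "outcome": outcome})
```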

“The system gets smarter with use, not just with model updates.”

More importantly, every new event becomes a future learning. The outcome of today’s stuck pipe diagnosis — what mechanism was identified, what actions were taken, whether they succeeded — gets captured as a new episode that will inform the next occurrence.


AI Maturity in Energy: From Document Retrieval to Decision Support

If I’m being honest about where the industry is today, most “AI in energy” deployments are sophisticated document retrieval systems. Ask a question, get an answer sourced from your data. That’s valuable — but it’s the first rung on a much taller ladder.

What I call the AI Operations Maturity Model describes five levels of AI capability in energy operations, from basic document retrieval to semi-autonomous operations. The progression looks something like this:

Level 1: Retrieval — “What does our standard say about casing design for this formation?” The AI finds and summarizes relevant documents. Most organizations are here.

Level 2: Analysis — “Given offset well data, what are the key risks for this well?” The AI doesn’t just retrieve — it calculates, compares, and identifies patterns across multiple data sources.

Level 3: Recommendation — “We have stuck pipe at 12,500 feet. What are the most likely mechanisms, and what should we try?” The AI applies a diagnostic framework, weighs evidence, and provides ranked recommendations with confidence levels.

Level 4: Coordinated Decision Support — “Given the completions plan, the current drilling parameters, and the formation prognosis, what’s the optimal approach for this section?” Multiple domain agents contribute their analysis, coordinated by an orchestrator, with a synthesized recommendation that accounts for cross-domain trade-offs.

Level 5: Semi-Autonomous Operations — The system continuously monitors operations, identifies emerging issues before they become problems, and takes routine actions within pre-approved parameters — while escalating anything outside its authority to human operators.

Each level requires more sophisticated architecture and technology. You can get to Level 1 with a single agent and a vector database. You can maybe stretch to Level 2 with good tooling. But Levels 3 through 5 require the kind of hierarchical, multi-agent architecture I’ve described — domain specialists, structured procedures, safety boundaries, institutional memory, and human-in-the-loop escalation — along with the leadership capabilities to support them.

The energy industry doesn’t need to leap to Level 5 overnight. But it does need to design systems with Level 5 in mind, so that today’s investments in AI infrastructure compound rather than becoming technical debt that needs to be replaced.


The Path Forward for AI in Energy Automation

The energy sector is at an inflection point with AI. The technology is capable enough. The data infrastructure, while imperfect, is increasingly accessible. The economic incentive — reducing NPT, optimizing operations, capturing institutional knowledge before it retires — is compelling. But the goal should be augmenting human capability, not replacing it.

What’s been missing is an architectural approach that takes the domain seriously. That recognizes you can’t bolt a chatbot onto a drilling operation and call it digital transformation. That understands the difference between a system that can generate plausible text about well control and a system that knows it must never, under any circumstances, make a well control decision autonomously. I discussed some of the common pitfalls of these deployments with Geoffrey Cann on a recent podcast — the lessons apply directly here.

Hierarchical multi-agent systems aren’t the only answer. But they represent a fundamentally different way of thinking about AI in safety-critical operations — one that starts with the question “what should this system not do?” rather than “what can we get this model to do?”

The shift from “what can AI do?” to “what should AI never do?” is where real progress starts.

Where is your organization on the maturity ladder? And more importantly — are you designing your current AI investments to climb it, or will you have to start over?


I’ll be exploring these concepts in depth at the 3rd Annual AI in Energy Summit in Houston on February 25, 2026, in my session “Architecting Toward Autonomous Operations — Hierarchical Multi-Agent Systems for Scalable Energy Automation.” If you’re attending, I’d welcome the conversation.


Frequently Asked Questions

What is a hierarchical multi-agent system? A hierarchical multi-agent system is an AI architecture where a central orchestrator coordinates domain-specific AI agents, each with defined scopes and safety boundaries, to handle complex operational decisions that no single agent can safely manage alone.

Why don’t single-agent AI systems work for energy operations? Energy operations require simultaneous expertise across multiple domains (drilling, completions, well integrity), safety-critical decision stratification, and institutional memory. A single agent with one context window cannot provide depth, safety, and breadth simultaneously.

What is the execution hierarchy in multi-agent AI? The execution hierarchy is a design principle that routes each task to the simplest reliable method: deterministic code first, then scripted automation, then LLM reasoning, then multi-step procedures, then multi-agent deliberation, and finally human decision. This ensures AI reasoning is only used where simpler methods are insufficient.

How mature is AI adoption in the energy sector? Most energy AI deployments are at Level 1 — document retrieval. The AI Operations Maturity Model describes a progression through analysis (Level 2), recommendation (Level 3), coordinated decision support (Level 4), and semi-autonomous operations (Level 5), each requiring progressively more sophisticated multi-agent architecture.




Want more insights on AI strategy and digital transformation? Subscribe to my monthly newsletter for analysis that cuts through the hype and focuses on what actually works.