Intro: Smarter Than a Goldfish?

Imagine hiring a bright young intern. First day on the job, they do OK. But every morning, they arrive with total amnesia. No memory of yesterday’s meeting. No context on clients. No understanding of what the team’s even working toward. How long would they last?

That’s how most AI systems operate today.

If you’ve experimented with AI agents in your finance team, portfolio operations, or due diligence workflows, you’ve likely seen the same thing: they’re fast, but forgetful. Powerful, but sometimes oddly dumb. Why? Because while prompt engineering helps you ask better questions, it does nothing to help AI remember what matters.

This is where context engineering enters the frame - and if you're an executive navigating the AI noise, it’s the most important concept you've probably never heard of.

AI that actually “gets it” isn’t just about picking the right model. It’s about feeding that model the right context - the right information, at the right time, in the right way.

As Andrej Karpathy put it, “The hottest new programming language is English.” He has since argued that “context engineering” is a better name than “prompt engineering” for the real skill: filling the model’s context window with just the right information for the task at hand.

So let's have a look at that then, shall we?


What Is Prompt Engineering (and Why It’s Not Enough)?

Prompt engineering is how most people first interact with large language models (LLMs). It’s about learning how to “talk” to the AI: giving it structured instructions so it can generate useful outputs. Think of it like briefing a junior analyst: if you’re clear and specific, you get a decent first draft.

This worked well in the early days. Ask ChatGPT to summarize a report, and it does. Ask it to generate investment memos from bullet points, and you might even get something you could tweak into shape.

But - and there’s always a but - prompt engineering only works in isolation. Each task is a one-off. The AI doesn’t remember what you asked yesterday. It doesn’t understand your industry, or that you prefer summaries in bullet form, or that a company called “Acme Holdings” is part of your portfolio.

You essentially have to re-brief it from scratch every morning.

Prompt engineering is helpful. But it’s not scalable for real-world, repeatable business use. You need to go beyond clever prompting. You need context engineering.


What Is Context Engineering?

Context engineering is what turns an LLM from a smart chatbot into a functioning, intelligent assistant. It’s the discipline of designing what the AI knows when it starts thinking - not just what you ask it.

That includes:

  • Relevant documents or data sources (e.g., CRM records, deal flow history)
  • Task history or previous user interactions
  • Company goals, tone of voice, brand language
  • Business rules or standard operating procedures

In short: it’s everything around the prompt that makes the AI useful in your business setting.

Think of it like prepping that same intern for week two on the job. Now they show up with notes from last week, access to the data room, an understanding of the firm’s strategy, and a sense of your communication style. Suddenly, they’re not just guessing. They’re thinking.

Modern context engineering uses several techniques to do this:

  • Retrieval-Augmented Generation (RAG): The AI pulls in relevant info from indexed sources before answering.
  • Memory: It recalls prior interactions with you and adapts accordingly.
  • Orchestration frameworks: These tools route tasks, build context stacks, and maintain workflows across multiple interactions.

If this sounds technical, that’s because it is. But the key takeaway for executives is simple: don’t just ask what the model can do - ask what it knows when it does it.
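For the technically curious, here is a minimal sketch of what assembling that context stack can look like in practice. Everything in it is illustrative: retrieve, load_memory and call_llm are stand-ins for a real document index, a memory store, and whichever model API your stack actually uses - not a specific vendor’s product.

def retrieve(question, top_k=3):
    # Stand-in for a document index (deal memos, CRM records, SOPs).
    passages = ["[deal memo excerpt]", "[CRM record excerpt]", "[SOP excerpt]"]
    return "\n".join(passages[:top_k])

def load_memory(user):
    # Stand-in for a memory store: notes from this user's earlier sessions.
    return "Prefers bullet-point summaries; currently focused on logistics deals."

def call_llm(prompt):
    # Stand-in for whichever chat model you call at the end.
    return f"(model response to a {len(prompt)}-character prompt)"

def answer_with_context(question):
    rules = ("Summarise in bullet points. Flag anything outside the fund's "
             "risk tolerance framework.")        # business rules and tone
    passages = retrieve(question)                # RAG: pull from your own sources
    memory = load_memory(user="deal_team")       # memory: prior interactions

    prompt = ("Standing instructions:\n" + rules + "\n\n"
              "Relevant documents:\n" + passages + "\n\n"
              "Previous interactions:\n" + memory + "\n\n"
              "Question: " + question)
    return call_llm(prompt)

print(answer_with_context("Summarise the key risks in this company's P&L."))

The model call at the end is the same one a bare prompt would hit; the difference is everything assembled before the question reaches it.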


Key Differences: A Tale of Two Analysts

Let’s bring this down to earth with a story.

You’re running diligence on a mid-market logistics firm. You’ve got two AI agents set up to assist.

  • Agent A works off a prompt: “Summarise the key risks in this company’s P&L.”
  • Agent B is context-engineered. It’s fed recent deal memos, your fund’s risk tolerance framework, past decisions, even your preference for summary style.

Agent A gives you a generic five-paragraph summary that could’ve been pulled from a McKinsey report in 2015. Agent B flags discrepancies in EBITDA treatment, cross-checks freight assumptions against sector benchmarks, and surfaces a red flag buried in the vendor’s assumptions that you’d missed.

Which one are you keeping?

That’s not just a snazzy use case. That’s the difference between a tool and a teammate.

Context is how you turn LLMs into repeatable, dependable support systems - not just one-off toys.
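For the technically minded, the gap between the two agents is visible in the raw material each one receives. Here is a rough sketch - the documents and framework named are hypothetical placeholders, not a prescribed setup:

# Agent A: a bare prompt. The model sees only the question.
prompt_a = "Summarise the key risks in this company's P&L."

# Agent B: the same question, wrapped in context the firm already owns.
prompt_b = "\n\n".join([
    "Standing instructions: bullet-point summaries; flag anything outside "
    "the fund's risk tolerance framework.",
    "Prior decisions: deal memos from the last two logistics transactions.",
    "Benchmarks: sector freight-cost assumptions for mid-market logistics.",
    "Question: Summarise the key risks in this company's P&L.",
])

Same model, same question; the second prompt simply arrives already knowing what the firm knows.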


Real-World Benefits for Execs

The reason context engineering matters for busy leaders isn’t technical. It’s operational.

Here’s what it unlocks:

  • Faster, more relevant output. When AI knows what you care about, you spend less time rewriting fluff. McKinsey notes that up to 60% of work time is spent searching for or synthesizing information - context-aware AI cuts that dramatically (McKinsey, 2023).
  • Fewer hallucinations. Context-rich systems are less likely to “make stuff up,” since they anchor responses to your data, not general internet guesses (Gartner, 2023).
  • Smarter workflows. Whether it’s market landscaping, writing investment briefs, or prepping board updates, context engineering lets you build assistants that think the way your business does.

This isn’t pie-in-the-sky AI theory. These are practical wins - especially when your team is lean, your margins matter, and your time is the bottleneck.


Getting Started: What Business Leaders Should Do Now

So what should you do if you want your AI agents to be more than glorified interns?

Here’s where to start:

  1. Map the workflows that rely on repeat context. Think deal reviews, pipeline updates, monthly ops reporting. These are ripe for context layering.
  2. Ask vendors about context, not just capabilities. Everyone loves to sell model size or latency. But what you really need to know is: Can this system remember what I care about?
  3. Start small, design for scale. Maybe you begin with an AI-powered portfolio tracker or memo generator. Great. But make sure it’s built to learn and evolve - not just repeat prompts.
  4. Treat context as a business asset. It’s not just “tech.” Your operating model, your tone of voice, your deal heuristics - these are all context layers that smart AI can use.

The best AI setups aren’t the flashiest. They’re the ones that quietly understand what matters to you.


Conclusion: The Hidden Lever of Great AI

Most execs are told to focus on the model. Pick GPT-4 over Claude. Pay for better latency. Fine-tune on your data.

But the smartest move might be simpler: make sure the AI knows what you know.

Context engineering doesn’t require you to become a coder. It just asks you to think like a strategist: What does this system need to know in order to perform well, consistently, in my business?

Because in the end, even the best AI is only as good as what you feed it.

Want to see how context engineering could level up your AI workflows? Contact our team - we’ll help you build AI that actually gets it.