Getting Started Guide
Step 7 of 10

The Real Skill: Direction and Judgement

Prompting is built into the workspace. The critical skill is knowing what you want, understanding the process, and assessing the quality of what comes back.


The prompt engineering industry sold you the wrong skill. Between 2023 and 2025, an entire cottage industry grew up around teaching people how to talk to AI. Prompt libraries were products. "Prompt engineer" was a job title. Courses charged hundreds of euros to teach chain-of-thought reasoning and few-shot example formatting. I ran some of those workshops myself, so I'm not throwing stones from a distance. At the time, it made sense. The interface was a blank text box. Everything depended on what you typed into it.

That world is already gone. If you're using Claude Code with a structured workspace, the prompting is done. It's embedded in the architecture, pre-written and pre-tested. The skills files contain step-by-step workflows. The rules files enforce your standards automatically. The CLAUDE.md files carry context that you'd otherwise type from scratch every session. When someone on your team types /proposal and a client proposal starts generating, they're not prompting. They're running a system that somebody else built.

The AI skills for business that actually matter now are older and more familiar than you'd expect. They're about knowing what you want, understanding how the work gets done, directing a fast but junior executor, and assessing whether the output is any good. That's process leadership. It's the same set of capabilities you use when you manage people, brief freelancers, or review work from contractors. The difference is speed. Claude responds in seconds, not days.

Think of it as managing a brilliant junior

I've trained dozens of business teams on Claude at this point, across professional services, education, property management, and marketing. The metaphor that clicks every time is this: Claude is a hyper-fast, technically gifted junior office assistant. I introduced this idea in What Claude Actually Is, and it matters even more when we're talking about the skills you actually need.

Your junior assistant can draft documents, write code, analyse data, restructure content, summarise reports, and perform calculations faster than any human you'll hire. They don't get tired or irritable. But they're junior. They need you to define the task. They need context about your specific business, your clients, your standards. They need you to check their work and redirect when they drift off course. The technical ability is extraordinary, but it's only as useful as your capacity to direct it and assess what comes back.

A manager who can't evaluate the work of a junior hire isn't a good manager, regardless of how talented the junior is. Same applies here. The skill isn't in teaching Claude how to write. Claude already knows how to write. The skill is in knowing what you want written, why, for whom, and whether the result meets your standard. That's a fundamentally different capability from crafting the perfect prompt.

The four capabilities that actually matter for AI in business

If prompting isn't the daily skill, what is? I keep coming back to four capabilities. None of them are technical. All of them are things you've probably been developing for years through your professional experience.

1. Knowing what you want

Before you engage Claude, you need clarity on the outcome. Not a vague sense of direction, but a specific picture of what "done" looks like.

"Write me something about marketing" is a vague instruction. Claude will produce something, but it'll be generic because you gave it nothing specific to work with. Compare that to: "Write a 500-word email to existing clients announcing our new service, emphasising the cost saving compared to their current approach, in a conversational but professional tone." Claude can execute that immediately because you've defined the outcome, the audience, the angle, and the style.

The pattern I see repeatedly is that precision of outcome determines quality of output. A solicitor who asks for "a letter about that planning issue" will get a worse result than one who says "draft a letter to Fingal County Council objecting to planning reference 24/1847 on grounds of overlooking and traffic impact, citing our client's property at 14 Elm Road." That's not a prompting technique. That's knowing your own business well enough to articulate what you need. You've been defining tasks for colleagues, contractors, and suppliers for years. The same skill transfers directly.

2. Understanding the process

You need to know the steps from start to finish. If you're producing a client proposal, you need to know what goes into a proposal, in what order, in what tone, and what the client expects to see. Claude can execute each step, but you need to know the sequence and the standards.

This is where domain expertise becomes essential, and where the quality of your direction really separates good use from bad. A project manager who has delivered fifty projects knows whether a plan is realistic. They know what gets missed, where timelines slip, and what clients actually care about versus what they say they care about. Claude doesn't have that experiential knowledge. It has pattern-matched across millions of documents, which gives it a useful starting point, but it doesn't replace twenty years of knowing how things actually work in your industry.

When you understand the process, you can break work into steps that Claude handles well. You know which parts need human judgement and which parts are mechanical. You know where the risks sit. That understanding is what makes the collaboration effective, not how cleverly you phrase the instruction.

3. Directing the work

Effective use of Claude isn't a single instruction followed by a finished product. It's a conversation. You give an instruction, read what Claude produces, give feedback, ask for changes, provide additional context. The back-and-forth is where the quality comes from.

"This section is too formal. The client is informal, make it conversational." "Add a section on timeline, they always ask about that." "The cost figures are wrong, here are the correct ones." "Good structure, but section three is too long. Cut it in half and move the detail to an appendix."

This is active engagement, not passive consumption. You're directing the work exactly as you'd direct a junior colleague. The difference is that Claude responds in seconds rather than hours, which means you can iterate much faster. A document that might take three rounds of review over two days with a human can go through ten rounds in thirty minutes. The skill here is giving clear, specific feedback. "Make it better" tells Claude nothing. "The opening paragraph should lead with the cost saving figure, not the feature description" tells Claude exactly what to change.

4. Assessing quality

This is the most critical of the four, and the one that separates experienced professionals from everyone else using the same tool. Can you look at what Claude produced and judge whether it's good enough? Does it meet the standard? Is it accurate? Is it appropriate for the audience? Would you put your name on it?

If you can't assess the quality, you can't use AI effectively. Full stop. A business owner who sends out a client proposal without reading it properly is taking a risk whether a human or an AI wrote it. The same goes for financial analyses, legal documents, marketing copy, and technical specifications. Someone with the relevant expertise needs to review the output, every time.

This is why AI doesn't replace expertise. It amplifies it. A marketing manager with ten years of experience can review Claude's copy in two minutes and know whether it'll land. A graduate with six months of experience might miss the issues entirely. Both used the same tool. The difference in outcome comes from the human, not the AI. Quality assessment means checking accuracy (are the facts correct?), appropriateness (is this right for the audience and context?), completeness (is anything missing?), and tone (does this sound like us?). These are judgement calls that require domain knowledge built over years.

Domain expertise beats AI expertise, and it's not close

There's a persistent idea that being good at AI requires technical understanding. That you need to know how large language models work, what tokens are, how temperature settings affect output, or what retrieval-augmented generation means. For the person building AI systems, that's true. For the person using AI as a business tool, it's largely irrelevant.

Consider what domain expertise actually looks like in practice. A solicitor with no AI training but twenty years of experience can assess whether a draft contract clause is sound. They know what's missing, what's ambiguous, and what wouldn't hold up. No amount of prompt engineering knowledge substitutes for that. A marketing manager who has run campaigns for a decade knows whether copy will land with the target audience. They know the difference between clever and effective. An accountant with fifteen years of practice can review a financial summary and spot the figure that doesn't make sense, because they know what ratios should look like and what patterns indicate errors.

None of these people need to understand how large language models work. They need to understand their own domain. The AI skill, the ability to interact with Claude effectively, is learnable in weeks. The domain expertise takes years. And the domain expertise is what makes AI useful.

This has practical implications for hiring and training. If you're choosing between investing in AI training for your experienced team or hiring "AI specialists" with no domain knowledge, invest in your experienced team every time. The domain expertise is the scarce resource. The AI knowledge isn't.

Prompting isn't dead. It's embedded.

I want to be clear about what I'm not saying. "Prompting doesn't matter" is not the argument. Prompting matters a great deal. What's changed is where it lives.

In a well-built Claude Code workspace, the prompting is in the infrastructure. Skills files are prompts. They contain the full workflow for a task, step by step, with checkpoints and quality standards. When you run a skill, you're running a pre-written prompt sequence that's been tested and refined. Rules files are prompts too. They enforce standards automatically; every time Claude starts a session, it reads your rules and applies them without being asked. UK English, no jargon, specific formatting requirements, all handled. CLAUDE.md files are prompts. They provide context about your business, your projects, your preferences, your conventions. Reference files and examples are prompts. They give Claude material to work from, a style to match, a structure to follow.
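To make that concrete, here is a minimal sketch of how such a workspace might be laid out. The folder names and file contents are illustrative assumptions, not a prescribed structure; your own workspace will reflect your business and your workflows:

```markdown
my-workspace/
├── CLAUDE.md              # Business context read at the start of every session:
│                          # who we are, who our clients are, our conventions
├── rules/
│   └── style.md           # Standards applied automatically, e.g.
│                          # "UK English. No jargon. Dates as DD/MM/YYYY."
├── skills/
│   └── proposal.md        # The step-by-step workflow behind a /proposal run:
│                          # 1. Read reference/sample-proposal.md for structure
│                          # 2. Confirm client name, scope, budget, timeline
│                          # 3. Draft: summary, approach, timeline, costs
│                          # 4. Check the draft against rules/style.md
└── reference/
    └── sample-proposal.md # A past proposal Claude matches for tone and layout
```

When someone types /proposal, Claude walks that pre-written sequence. The prompting happened once, when the workspace builder wrote these files; the daily user only supplies the specifics and judges the result.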

The workspace is a prompt library that runs without the user thinking about it. The person who builds and maintains that workspace, who writes the skills, defines the rules, and structures the CLAUDE.md files, is doing prompt engineering. They're doing it once, and everyone who uses the workspace benefits from it.

For the daily user, the interaction is simpler. You say what you need. The workspace handles the how. "Run the content engine on this topic." "Grade these assignments." "Write a proposal for this client." The skills, rules, and context do the heavy lifting. The daily user's skill set is about process leadership, not prompt craftsmanship.

What this means for training your team

If you're rolling AI out across a team, this framing changes your training approach entirely. The pattern I see in organisations that get AI adoption right is that they focus on what their people already know, not on teaching them something new.

Training should focus on task definition: how to describe what you want clearly enough for Claude to execute. This is the same skill as writing a good brief for a freelancer or a clear email to a colleague. It should focus on quality assessment: how to review what Claude produces and judge whether it meets the standard, building on existing professional judgement. It should cover feedback and iteration: how to engage in a productive back-and-forth with Claude using specific feedback and clear corrections. And it should reinforce process knowledge: understanding the steps in a workflow well enough to guide Claude through them.

Training should not focus on prompt engineering techniques like chain of thought, few-shot examples, or role prompting. Those are embedded in the workspace. It shouldn't cover how large language models work technically. Interesting, but not necessary for effective daily use. And it shouldn't involve memorising prompt templates. The workspace carries the templates.

You need one person, or a small team, who understands the workspace deeply. They build and maintain the skills, rules, and CLAUDE.md files. They understand how Claude processes instructions and how to structure the workspace for best results. This is the "workspace builder" role, and it does require deeper Claude knowledge. Everyone else needs the four capabilities I've outlined: knowing what they want, understanding the process, directing the work, and assessing quality. These are professional skills they already have. The training is about applying them in a new context, not learning an entirely new discipline. For a structured approach to team deployment, see Rolling Out to Your Team.

The real competitive advantage

This reframing matters because it changes where competitive advantage comes from. If AI skill were primarily about prompting, the advantage would go to whoever writes the best prompts. That would be a technical arms race, and the person with the most AI knowledge would win.

But when the daily skill is direction and judgement, the advantage goes to whoever has the deepest domain expertise, the clearest processes, and the best professional judgement. That's a fundamentally different competition, and it favours experienced professionals and well-run businesses over early adopters and AI enthusiasts.

The solicitor who knows contract law inside out will produce better AI-assisted contracts than the solicitor who spent a week on a prompt engineering course. The marketing manager who understands their audience will produce better AI-assisted campaigns than the one who memorised prompt templates. The project manager who has learned from fifty projects will produce better AI-assisted plans than the one who can explain how transformers work.

Your years of experience aren't made obsolete by AI. They're made more productive. The depth of your knowledge, the quality of your judgement, the clarity of your standards; these are the inputs that determine the quality of the output. Claude provides the speed and the technical execution. You provide the direction and the quality bar.

That is the real skill, and it's one you've been building your entire career.
