If you can't review it, don't get AI to do it
"If you can't review it, don't get AI to do it" is a good mantra to follow when you're thinking about whether a project is suitable for AI assistance. It's also a snazzy chorus for an AI-generated song.
It's easy to produce impressive-looking output from AI models like ChatGPT, Copilot, and Gemini. But unless you or someone else can read, understand, and validate its truth, authenticity, and value, you're at risk of generating a useless virtual paperweight, or worse, making poor decisions that could harm you, your organisation, or your relationships with colleagues and clients.
That’s why at Versantus, we believe that humans in the loop are critical. We use AI tools widely, and they’re a huge benefit, but they cannot (yet) replace our wisdom, context, critical thinking and adaptability. We use them to speed through low value tasks, generate ideas, support data analysis and coding, and to give us more time on the higher-value, more human, and more empathetic part of our work.
What was knowledge work like before the rise of AI?
Think back to the "before times" - 2023. When solving a tricky problem you would typically sit down with a blank document, define the problem or opportunity, tap into your knowledge (and that of others, including Googling), use your wisdom and critical thinking, experiment with solutions, and evaluate them before making a decision on a course of action.
- Define the problem
- Gather knowledge and context
- Ideate
- Explore solutions
- Evaluate them
- Decide and act
When using AI, it’s easy to skip most of this process. But beware: although the results can still look impressive, if you skip the context and evaluation phases, you could easily be led down the wrong path.
If the process looks like this, you have a potential problem.
- Define the problem
- AI gives the answer
- Act
For example, asking a large language model (LLM) to write a company strategy without providing the right context can produce a plan that looks realistic but lacks the knowledge, wisdom, and situational awareness that only you can supply.

So what’s the answer?
It's not hard! You just need to use the right tools, provide good context, review the answers the AI gives you, and own the solution.
1. First ask: is this a good use of AI?
AI tools are incredibly helpful in many areas of life and work: holiday planning, stories for the kids, recipe making, DIY guides, summarising reports, creating images, writing blogs... but they're not the best tool for everything, and you should always consider whether what you're doing is going to benefit from using an AI tool. If not, skip it.
2. Garbage in, Garbage out - good prompting means good context
In the before-times of Google search, you might get away with a quick 'how to marketing strategy' search. The critical thinking and evaluation were then done by you, based on a series of options contained in hundreds of pages of links.
When you use an AI chatbot, it will give you *the answer* (singular) rather than a list of links, so it's important to understand the research it has done and how it reached that answer, rather than skipping this critical thinking step altogether.
Prompts are what you put into an AI chatbot - the search terms, questions, and instructions you give it. Slightly different prompts can give very different answers, and for a while now people in the AI space have been talking about "Prompt Engineering": with subtle but important tweaks to your input questions, you can get much better results. "You are a world-class marketing strategist. Plan out a 12-month plan for my soft drink brand. Evaluate against my competition and break down the plan by month and by quarter, with clear goals and KPIs." is more likely to give a thoughtful and thorough answer than "How can I sell more cans of soda?".
The term "Context Engineering" is now becoming more popular. Context Engineering has a wider meaning involving not just the prompt, but also the background information, the previously stored information, and any additional input data that the AI tool might use, such as search results from a Retrieval Augmented Generation (RAG) system or a Model Context Protocol (MCP) server. The idea is that you should focus on providing a wider range of inputs rather than just focusing on the specific phrases you use.
Prompt Engineering is about phrasing your question cleverly.
Context Engineering is about supplying the right background information.
💬 Prompt example (bad):
“How can I sell more cans of soda?”
💡 Prompt example (better):
“You are a world-class marketing strategist. Develop a 12-month plan to boost my UK soft drink sales. Include competitive analysis and KPIs by quarter.”
📎 Context Engineering adds:
• Your target audience
• Market data
• Prior marketing campaigns
• Brand tone and voice
• Any legal or strategic constraints
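If you're technically minded, one way to see the difference is that context engineering is something you can do systematically before the prompt ever reaches the model. The sketch below is purely illustrative - the function and field names are hypothetical, not from any real AI library - but it shows the idea of assembling background information and the task into a single, well-structured input:

```python
# Illustrative sketch only: build_prompt and the context fields below are
# hypothetical, not a real AI library's API. The point is that background
# context is assembled deliberately, not left to chance.

def build_prompt(task: str, context: dict[str, str]) -> str:
    """Combine labelled background context with the task into one prompt."""
    sections = [f"## {label}\n{text}" for label, text in context.items()]
    background = "\n\n".join(sections)
    return (
        "You are a world-class marketing strategist.\n\n"
        f"Background:\n{background}\n\n"
        f"Task: {task}"
    )

prompt = build_prompt(
    "Develop a 12-month plan to boost my UK soft drink sales, "
    "with competitive analysis and KPIs by quarter.",
    {
        "Target audience": "Health-conscious adults aged 25-40",
        "Market data": "Summary of the UK soft drink market",
        "Brand tone": "Playful but trustworthy",
        "Constraints": "No health claims without legal review",
    },
)
```

A RAG system does much the same thing automatically: it retrieves relevant documents and slots them into the "Background" section before the model sees your question.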
Treat the AI as a human colleague you want to get the best out of, and don't aim for a 'one hit' perfect prompt. Instead, treat it as you would someone from your team: give it a brief, including any background guidance such as brand guidelines, research papers, notes, and images, then answer any questions it has, review the results, and iterate. The goal is to impart your knowledge, wisdom, and context to the AI for a more suitable response, so it's OK to treat it as a conversation, and it's OK to go into more detail. If you give your colleague or your AI no context, they can't give you the result you need.
3. Oversee and review it
To mitigate the risk of doing the wrong thing, you should always be able to evaluate the outputs of the LLM. How much effort this evaluation takes will vary depending on the type of work you're doing.
When reviewing AI output, ask:
- Does it align with our goals?
- Is it correct and credible?
- Does it reflect our tone and brand?
- Would I be confident putting my name to this?
A real world example - this blog post
Take this blog post for example. I could have asked ChatGPT to “Write a blog post about ‘humans in the loop’” and I would have quickly had an article that could go “straight to print”. But it wouldn’t be *my article* with *my thoughts*. And that’s what you pay me for.
So what I did instead was this:
- Voice note: 10 minutes
- Letterly tidy-up: 10 minutes
- Editing and re-writing: 60 minutes
- Imagery, SEO, titles: 15 minutes
- Colleague review: 10 minutes
- Publishing online: 2 minutes
I was still able to have an idea, generate a blog post, and publish it within a few hours, whereas before it would have taken several more people and much more time.
You can decide if you prefer mine or ChatGPT’s, but whatever you think (pick me! pick me!), the one you’re reading now is all my own idea, with my experience and knowledge, my critical thinking, and my waffling. I get paid for this stuff*, so it’s important I play my part.
* incredible, right?!
4. You make it, you own it
You’re paid to do a job. AI is your intern, not your scapegoat, and although LLMs can help you move faster than ever, you can’t delegate your job away to them. You are responsible for your work, regardless of the colleagues, tools, and prior research you rely on.
Just as you wouldn’t submit an intern’s unreviewed draft to a client, you shouldn’t rely on AI without adding your own judgement.
Similarly, if you use Gemini to plan out a 6 month marketing strategy for your organisation, you should not blame the AI if your sales funnel stays empty. You are in charge, and you are responsible for the outputs and the outcomes.
Wrap it up, Nik!
In short, if you can't review the AI-generated output, you shouldn't delegate the task to AI. Without being able to evaluate the reasoning and outputs, you risk generating poor-quality content that may lead to unwanted outcomes when acted upon.
Use AI to move faster - but don’t lose sight of the human intelligence behind great work.
At Versantus we've been solving real-world problems with AI for 3 years.
If you want to understand how you can use these tools to reduce your low-value work, speed up delivery, and improve quality for your clients, contact us today to book a free 30-minute consultation with one of our specialists.