Protify


RAG, Fine-Tuning, Prompting: A Non-Engineer's Decision Tree

Three approaches dominate AI-product engineering: prompting, RAG, and fine-tuning. Vendors will tell you their pet approach is the right one. Here's the decision in plain English.

When a non-engineer evaluates AI proposals, three terms keep coming up: prompting, RAG, and fine-tuning. Vendors pitch each as "the right way." None of them is universally right. Each fits a specific class of problem. Here's the decision tree, in plain English, with no model-architecture detours.

Plain definitions

Prompting. You send the model a question or a task, and it responds. That's it. No special infrastructure, no training data, no setup beyond writing good instructions. This is what you do when you use ChatGPT in the browser.
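For readers who want to see the mechanics, prompting really is this small. The sketch below uses a hypothetical call_model stub standing in for any LLM provider's API; the entire "engineering" lives in the instruction text.

```python
def call_model(prompt: str) -> str:
    # Hypothetical stand-in: a real system would call an LLM API here.
    return f"[model response to: {prompt[:40]}...]"

def draft_email(points: list[str]) -> str:
    # All the work is writing a clear instruction.
    prompt = (
        "Draft a short, friendly email covering these points:\n"
        + "\n".join(f"- {p}" for p in points)
    )
    return call_model(prompt)
```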

RAG (Retrieval-Augmented Generation). Before sending the question to the model, you look up relevant material — from your company's documents, database, support tickets, product catalog — and include it in the prompt. The model now has access to your business's specific knowledge alongside whatever it already knew.
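The whole pattern fits in a few lines. Here is a deliberately naive sketch: the retrieval step scores documents by shared keywords, where production systems typically use vector embeddings, but the shape of the loop — look up, then include in the prompt — is the same. All names and documents are illustrative.

```python
def retrieve(question: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by how many words they share with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_rag_prompt(question: str, documents: list[str]) -> str:
    # The retrieved material is simply pasted into the prompt as context.
    context = "\n".join(retrieve(question, documents))
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

docs = [
    "Refunds are processed within 5 business days.",
    "Our office is closed on public holidays.",
    "Shipping to Canada takes 7 to 10 days.",
]
prompt = build_rag_prompt("How long do refunds take?", docs)
```

The model never changes; only the prompt does. That is why RAG stays current as your documents change, with no retraining.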

Fine-tuning. You train the base model on your specific data so it learns your domain — your style, your formats, your judgment patterns. This produces a custom version of the model that behaves the way you want even before you prompt it.
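Concretely, fine-tuning data is just many worked examples of the behavior you want. Most services accept some JSONL variant; the field names below (prompt/completion) are illustrative and vary by provider, as does the number of examples needed — typically hundreds to thousands, not two.

```python
import json

# Each example pairs an input with the exact output style you want learned.
brand_voice_examples = [
    {"prompt": "Announce a price increase.",
     "completion": "Heads up: prices are changing on March 1. Here's why..."},
    {"prompt": "Apologize for downtime.",
     "completion": "We were down for 40 minutes today. That's on us..."},
]

# JSONL: one JSON object per line, the common upload format.
jsonl = "\n".join(json.dumps(ex) for ex in brand_voice_examples)
```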

Question 1: Does the model already know enough?

If your use case is about general knowledge, common formats, widely available information, or generic reasoning — prompt. Don't add complexity you don't need. A surprising number of "AI features" can be built with nothing but a well-crafted prompt and a good model. Most production AI in 2026 looks like this.

Examples that fit prompting: drafting emails, summarizing public articles, generating code in common languages, translating between languages, classifying sentiment, brainstorming.

Question 2: Does it need to know something specific to your business?

If the answer requires knowing your customers, your products, your internal documentation, your support history — RAG. This is the second-most-common pattern in production systems. The model still does the reasoning; you just hand it the right context to reason about.

Examples that fit RAG: a customer support assistant that knows your help docs, an internal research tool that searches your meeting notes, a sales-enablement bot that pulls from your case studies, a product Q&A that grounds answers in your spec sheets.

RAG is usually the right answer when people say "the model needs to know about our company." Fine-tuning sounds like the answer; it almost never is.

Question 3: Do you need to change how the model behaves?

Fine-tuning is for changing the model's behavior — its style, its output format, its judgment, its vocabulary. Not for teaching it new facts (RAG handles that better) and not for one-off tasks (prompting handles that better).

Examples that genuinely benefit from fine-tuning: producing output in a very specific format that's hard to specify in a prompt, mimicking a particular brand voice across thousands of generations, applying domain-specific judgment that requires lots of examples to teach, optimizing cost by getting a smaller model to behave like a larger one for a narrow task.
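The three questions above condense into a small function. This is a sketch of the article's decision tree, not a complete rubric — real evaluations mix these approaches (RAG plus fine-tuning is common), but the ordering is the point.

```python
def choose_approach(needs_company_data: bool, needs_behavior_change: bool) -> str:
    # Question 1: if the model already knows enough, just prompt.
    if not needs_company_data and not needs_behavior_change:
        return "prompting"
    # Question 2: company-specific knowledge -> RAG, not fine-tuning.
    if needs_company_data and not needs_behavior_change:
        return "RAG"
    # Question 3: changing style, format, or judgment -> fine-tuning,
    # and only after prompting and RAG have been tried and found wanting.
    return "fine-tuning"
```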

The most common mistake

Skipping prompting and RAG, jumping straight to fine-tuning. It feels rigorous. It usually isn't. Fine-tuning is expensive, time-consuming, and reduces flexibility — every time the base model improves, you have to redo the training. Most production AI systems start with prompting, add RAG when they need company-specific knowledge, and rarely actually need fine-tuning.

If a vendor proposes fine-tuning before they've tried prompting plus RAG, ask why. The answer is usually that they sell fine-tuning services. That doesn't make them wrong, but it should make you skeptical.

How to use this

When evaluating an AI proposal, ask the vendor: "Did you try just prompting first? What did it get wrong? Did you try RAG next? What did that get wrong?" If they didn't try the simpler approaches, the proposal is probably over-engineered. If they did and have specific reasons each one wasn't enough, you're talking to someone who knows what they're doing.

If you'd like a vendor-neutral read on whether prompting, RAG, or fine-tuning is right for your use case, we're happy to think it through.

Confused by RAG, fine-tuning, and prompting?

We've built all three in production. We'll help you pick the one that fits — not the one our team happens to specialize in.