AI buzzwords decoded: A practical guide for product teams

Understand the terms behind the tools — from MCP and tokens to agents and RAG — so you can collaborate confidently, ask smarter questions, and make better product decisions.

AI is now part of how most product teams operate — from generating summaries and prioritization lists to powering copilots and customer-facing features. But many of the terms flying around in AI discussions are still a mystery to non-engineers.

This guide explains the most relevant AI concepts you’re likely to encounter and shows how they appear in real workflows. The goal: help your team cut through jargon and use AI more effectively.

1. System prompts: The hidden script that sets the AI’s behaviour

A system prompt (sometimes called a source prompt) is the standing instruction built into an AI tool that tells the model how to act, regardless of what you type. It sets the tone, level of detail, and rules for the output.

Example in your workflow:

You use an AI assistant to draft release notes, but the results feel flat. The system prompt might be “Write concise, formal updates.” Changing it to “Write in a clear, upbeat tone with short sentences” can immediately change the results.

Why it matters:

If an AI tool consistently produces unhelpful output, check whether you can view or edit its system prompt. This often fixes problems faster than switching models.
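Under the hood, this standing instruction is usually just the first message in every request. Here is a minimal sketch assuming an OpenAI-style messages format; `build_request` is a hypothetical helper, and no real API call is made:

```python
# Sketch: the system prompt is the first message in a chat-style request.
# The actual sending call varies by provider; this only assembles the request.

def build_request(system_prompt: str, user_input: str) -> list[dict]:
    """Assemble the messages list an OpenAI-style chat API expects."""
    return [
        {"role": "system", "content": system_prompt},  # fixed instructions
        {"role": "user", "content": user_input},       # what the user typed
    ]

flat = build_request(
    "Write concise, formal updates.",
    "Draft release notes for v2.1",
)
upbeat = build_request(
    "Write in a clear, upbeat tone with short sentences.",
    "Draft release notes for v2.1",
)
```

Swapping one string in the first message is all it takes to change the tone of every response the tool produces.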

2. RAG (Retrieval-Augmented Generation): Pulling in the right information before answering

RAG is a method where the AI retrieves relevant information from a specified source — such as a product knowledge base, customer database, or internal docs — before generating a response. Without RAG, the model relies only on the data it was trained on, which may be outdated or incomplete.

Example in your workflow:

You launch an AI-powered help centre. If the RAG setup points to your latest documentation, customers get accurate answers. If it’s pointed at an outdated archive, they won’t.

Why it matters:

If answers are wrong or missing detail, the issue may be in the retrieval setup, not the model.
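The retrieval step can be sketched in a few lines. This toy example uses keyword matching as a stand-in for the vector search real RAG systems use; the knowledge base and question are invented:

```python
# Toy RAG sketch: retrieve the most relevant doc, then ground the answer in it.
# Keyword matching stands in for the vector search used in real systems.

KNOWLEDGE_BASE = {
    "refunds": "Refunds are processed within 5 business days.",
    "sso": "SSO is available on the Enterprise plan.",
}

def retrieve(question: str) -> str:
    """Return the doc whose topic appears in the question, if any."""
    for topic, doc in KNOWLEDGE_BASE.items():
        if topic in question.lower():
            return doc
    return ""  # nothing retrieved: the model would fall back on training data

def build_prompt(question: str) -> str:
    """Prepend the retrieved context so the model answers from it."""
    context = retrieve(question)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("How long do refunds take?")
```

Notice that if `retrieve` points at the wrong or empty source, the generation step never sees the right facts, which is exactly why stale documentation produces stale answers.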

3. MCP (Model Context Protocol): A standard way to connect AI to tools and data

MCP is an open protocol that standardises how AI applications connect models to external tools and data sources. An MCP server exposes specific capabilities, such as searching your documentation or querying a database, and any MCP-capable assistant can use them without custom integration code for each pairing.

Example in your workflow:

A support bot could connect to an MCP server that exposes only your approved documentation, with instructions like: “Answer only from the connected documentation. Decline questions outside this scope. Use a polite, concise tone.” Engineers can connect several servers so the same assistant fetches data from one system and files tickets in another.

Why it matters:

When an assistant can’t see the right data or take the right actions, the gap is often in what it’s connected to, not in the model itself.
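In practice, an MCP-capable client is typically pointed at its servers through a small config file. Below is a hypothetical example in the shape used by clients such as Claude Desktop; the server name and package are placeholders, not real ones:

```json
{
  "mcpServers": {
    "docs": {
      "command": "npx",
      "args": ["-y", "@yourorg/docs-mcp-server"]
    }
  }
}
```

Each entry tells the client how to launch a server; the model then sees whatever tools and data that server chooses to expose, and nothing more.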

4. Agents: AI that can take actions, not just answer questions

An agent is an AI system that can make decisions and carry out multi-step tasks using external tools, APIs, or systems.

Example in your workflow:

An onboarding assistant could:

  1. Look up a customer record in your CRM.
  2. Create a tailored onboarding checklist.
  3. Email it to the customer.

Each step is a decision and an action.
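The three steps above can be sketched as a simple script. Every function here is a hypothetical stand-in for a real CRM, docs, or email integration:

```python
# Agent sketch: each step is a tool call the agent decides to make.
# look_up_customer, build_checklist, and send_email are stand-ins for
# real CRM, documentation, and email integrations.

def look_up_customer(customer_id: str) -> dict:
    """Stand-in for a CRM lookup."""
    return {"id": customer_id, "plan": "enterprise", "email": "ana@example.com"}

def build_checklist(customer: dict) -> list[str]:
    """Tailor the onboarding steps to the customer's plan."""
    steps = ["Create your workspace", "Invite your team"]
    if customer["plan"] == "enterprise":
        steps.append("Configure SSO")
    return steps

def send_email(to: str, body: str) -> str:
    """Stand-in for an email API."""
    return f"sent to {to}"

def onboard(customer_id: str) -> str:
    customer = look_up_customer(customer_id)   # 1. look up the CRM record
    checklist = build_checklist(customer)      # 2. tailor the checklist
    return send_email(customer["email"], "\n".join(checklist))  # 3. email it
```

In a real agent, the model chooses which of these tools to call and in what order, which is precisely where the power and the risk come from.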

Why it matters:

Agents can be powerful but risky. You need to know what systems they can access and what safeguards are in place.

5. Tokens: The unit AI models use to measure work

A token is a chunk of text (about four characters in English) the model processes. Tokens are used for both inputs (prompts) and outputs (responses), and most models have a maximum number they can handle at once. Token usage also affects cost and speed.

Example in your workflow:

If you paste an entire PRD into a prompt, you pay to process every token, and you might hit the model’s limit, causing the answer to be cut off.

Why it matters:

Knowing how many tokens your requests use helps you control costs, reduce latency, and avoid incomplete results.
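For back-of-envelope budgeting, the four-characters-per-token rule is usually enough; exact counts require the model’s own tokenizer (for OpenAI models, the tiktoken library). A minimal estimator:

```python
# Rough token estimate using the ~4-characters-per-token rule for English.
# Exact counts need the model's own tokenizer (e.g. OpenAI's tiktoken).

def estimate_tokens(text: str) -> int:
    """Approximate token count; real tokenizers will differ somewhat."""
    return max(1, len(text) // 4)

# A PRD pasted wholesale into a prompt adds up quickly.
prd = "Our Q3 goal is to reduce onboarding time by 30 percent. " * 200
print(estimate_tokens(prd))
```

Running an estimate like this before pasting a long document into a prompt tells you whether you are about to blow past the model’s context limit.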

6. Prompting vs. fine-tuning: Two ways to change output

Prompting is adjusting the instructions you give the model. Fine-tuning is further training the model on your own examples, so the behaviour is baked into the model itself.

Example in your workflow:

If your chatbot overuses technical jargon, try prompting it with “Explain at a sixth-grade reading level.” Only consider fine-tuning if you have highly specialised language or workflows that prompting alone can’t handle.

Why it matters:

Prompting is faster, cheaper, and reversible. Fine-tuning is costly and harder to change — use it only when there’s a clear need.

What to explore next

If you’re curious and want to go deeper:

  • Test prompt variations and track the results.
  • Check what’s in the system prompts of tools you already use.
  • Explore RAG with your own data to see how retrieval quality changes output.
  • Learn how orchestration frameworks like LangChain or Inngest chain AI tasks.

A recommended free starting point: deeplearning.ai

Final thought

You don’t need to build AI infrastructure to work effectively with it, but you do need to understand the moving parts. Being able to explain MCP, RAG, agents, and tokens in plain language makes you a stronger partner to engineering and helps you shape better product decisions.
