Understand the terms behind the tools — from MCPs and tokens to agents and RAG — so you can collaborate confidently, ask smarter questions, and make better product decisions.
AI is now part of how most product teams operate — from generating summaries and prioritization lists to powering copilots and customer-facing features. But many of the terms flying around in AI discussions are still a mystery to non-engineers.
This guide explains the most relevant AI concepts you’re likely to encounter and shows how they appear in real workflows. The goal: help your team cut through jargon and use AI more effectively.
A source prompt (often called a system prompt) is the permanent instruction baked into an AI tool that tells the model how to act, regardless of what you type. It sets the tone, level of detail, and rules for the output.
You use an AI assistant to draft release notes, but the results feel flat. The source prompt might be “Write concise, formal updates.” Changing it to “Write in a clear, upbeat tone with short sentences” can immediately change the results.
If an AI tool consistently produces unhelpful output, check whether you can view or edit its source prompt. This often fixes problems faster than switching models.
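To make this concrete, here is a minimal sketch of how a source prompt is set in code, using the OpenAI Python client purely as an illustration; the model name and both prompt strings are placeholders, and other providers have an equivalent "system" role.

```python
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        # The source (system) prompt: applied to every request, whatever the user types.
        {"role": "system", "content": "Write in a clear, upbeat tone with short sentences."},
        # What the user actually types.
        {"role": "user", "content": "Draft release notes for the new export feature."},
    ],
)
print(response.choices[0].message.content)
```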
RAG (retrieval-augmented generation) is a method where the AI retrieves relevant information from a specified source — such as a product knowledge base, customer database, or internal docs — before generating a response. Without RAG, the model relies only on its pre-trained data, which may be outdated or incomplete.
You launch an AI-powered help centre. If the RAG setup points to your latest documentation, customers get accurate answers. If it’s pointed at an outdated archive, they won’t.
If answers are wrong or missing detail, the issue may be in the retrieval setup, not the model.
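To show the mechanics, here is a stripped-down sketch of the retrieve-then-generate pattern; the toy keyword-overlap retriever stands in for a real vector search, and the document snippets and helper names are invented for illustration.

```python
# Toy knowledge base. In a real RAG setup this would be your live documentation,
# indexed in a vector database and searched by semantic similarity.
DOCS = [
    "Exports: CSV export is available on the Team plan and above.",
    "Billing: invoices are emailed on the first day of each month.",
    "SSO: single sign-on is configured under Settings > Security.",
]

def retrieve(question: str, docs: list[str]) -> str:
    """Return the document that shares the most words with the question."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question: str) -> str:
    """Retrieve first, then generate: the model is told to answer only from the retrieved text."""
    context = retrieve(question, DOCS)
    return f"Answer using only this documentation:\n{context}\n\nQuestion: {question}"

print(build_prompt("Which plan includes CSV export?"))
```

If the retriever points at the wrong documents, no amount of prompt tweaking will fix the answers, which is why checking the retrieval setup comes first.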
MCP (Model Context Protocol) is an open standard that defines how an AI model connects to external tools, data sources, and systems. It's like a shared operating manual: any tool that follows it can be plugged into the model for a specific use case.
A support bot might use MCP to connect only to the approved documentation and the ticketing system, so it can look up answers and log follow-ups but nothing else. Rules about tone and scope still belong in the source prompt; MCP governs what the bot can actually reach. Engineers can connect multiple MCP servers so one supplies the data and another carries out actions such as formatting and sending the answer.
When AI output feels inconsistent, reviewing its MCP connections shows whether the model has access to the right tools and data for the job.
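As an illustration, the sketch below uses the FastMCP helper from the official MCP Python SDK (the `mcp` package) to expose one tool; the server name, the tool, and the documentation lookup are hypothetical, and a support bot would reach your docs through a server like this rather than directly.

```python
from mcp.server.fastmcp import FastMCP

# An MCP server exposes the tools and data an AI application is allowed to use.
mcp = FastMCP("support-docs")

@mcp.tool()
def search_docs(query: str) -> str:
    """Search the approved documentation and return the best-matching passage."""
    # Hypothetical lookup; in practice this would query your documentation store.
    return f"Top result for '{query}' from the approved docs."

if __name__ == "__main__":
    mcp.run()  # any MCP-compatible client, such as a support bot, can now call search_docs
```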
An agent is an AI system that can make decisions and carry out multi-step tasks using external tools, APIs, or systems.
An onboarding assistant could, for example:

- look up a new customer's sign-up details
- create their workspace in your product
- send a personalised welcome email
- schedule a follow-up check-in

Each step is a decision and an action.
Agents can be powerful but risky. You need to know what systems they can access and what safeguards are in place.
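A highly simplified sketch of that loop for the onboarding example follows; the three tool functions are stand-ins for real API calls, and in a real agent the model, not a fixed plan, decides which tool to use at each step.

```python
# Hypothetical tools the agent is allowed to call; in production these would hit real systems.
def create_workspace(user: str) -> str:
    return f"Workspace created for {user}"

def send_welcome_email(user: str) -> str:
    return f"Welcome email sent to {user}"

def schedule_checkin(user: str) -> str:
    return f"Check-in scheduled with {user}"

TOOLS = {
    "create_workspace": create_workspace,
    "send_welcome_email": send_welcome_email,
    "schedule_checkin": schedule_checkin,
}

def run_onboarding_agent(user: str, plan: list[str]) -> None:
    """Each step is a decision (which tool to use) followed by an action (calling it)."""
    for step in plan:
        tool = TOOLS[step]   # decision: scripted here, chosen by the model in a real agent
        print(tool(user))    # action: the agent calls an external system

run_onboarding_agent(
    "ada@example.com",
    ["create_workspace", "send_welcome_email", "schedule_checkin"],
)
```

The TOOLS dictionary is also where the safeguards live: the agent simply cannot call anything that isn't listed there.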
A token is a chunk of text (about four characters in English) the model processes. Tokens are used for both inputs (prompts) and outputs (responses), and most models have a maximum number they can handle at once. Token usage also affects cost and speed.
If you paste an entire PRD into a prompt, you pay to process every token, and you might hit the model’s limit, causing the answer to be cut off.
Knowing how many tokens your requests use helps you control costs, reduce latency, and avoid incomplete results.
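One practical habit is to count tokens before sending a prompt. The sketch below uses OpenAI's tiktoken library as one example; other models tokenise differently, so treat the numbers as approximate.

```python
import tiktoken

# cl100k_base is the encoding used by several recent OpenAI models.
encoding = tiktoken.get_encoding("cl100k_base")

prd_text = "Problem statement: customers cannot export reports as CSV..."  # paste your PRD here
tokens = encoding.encode(prd_text)

print(f"{len(tokens)} tokens")  # compare against the model's context limit before sending
```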
Prompting is adjusting the instructions you give the model. Fine-tuning is training the model further on your own examples so that its default behaviour changes.
If your chatbot overuses technical jargon, try prompting it with “Explain at a sixth-grade reading level.” Only consider fine-tuning if you have highly specialised language or workflows that prompting alone can’t handle.
Prompting is faster, cheaper, and reversible. Fine-tuning is costly and harder to change — use it only when there’s a clear need.
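To make the difference tangible, here is a sketch: prompting is a one-line change to the instructions you send, while fine-tuning means assembling many training examples (shown here in the JSONL-style chat format several providers use) and running a separate training job; the examples themselves are invented.

```python
# Prompting: change the instruction, nothing else. Takes effect on the very next request.
prompt = "Explain at a sixth-grade reading level: what does our SSO integration do?"

# Fine-tuning: collect many example conversations showing the behaviour you want,
# then train a new model variant on them. Costly, slow, and much harder to undo.
training_examples = [
    {"messages": [
        {"role": "user", "content": "What does our SSO integration do?"},
        {"role": "assistant", "content": "It lets your team log in with one company account."},
    ]},
    # ...hundreds more examples like this one
]
```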
If you’re curious and want to go deeper:
A recommended free starting point: deeplearning.ai
You don’t need to build AI infrastructure to work effectively with it, but you do need to understand the moving parts. Being able to explain MCPs, RAG, agents, and tokens in plain language makes you a stronger partner to engineering and helps you shape better product decisions.