The Insider Playbook for ChatGPT, Claude, Gemini and Perplexity
Hidden rules. Model-specific techniques. What each model variant actually does. Prompt security. Twelve copy-paste frameworks. Everything the documentation does not tell you.
The gap between a novice and an expert AI user is not about magic prompts. It is about setup.
Here is what the AI companies do not advertise. Every large language model has a hidden architecture of priorities, constraints, and behavioural patterns that determine the quality of its output. Most users interact with the surface layer. This guide takes you underneath it.
When someone types a one line question into ChatGPT and gets a generic answer, they assume the AI is limited. When a power user types a precisely constructed prompt into the same model and gets a response that feels like it was written by the smartest person in the room, they understand something different. The model did not change. The interaction architecture did.
The difference between an expert AI user and an average one is not knowledge of magic prompts. It is the habit of spending two minutes setting up context before asking anything. An expert writes a five-line context block at the start of every important conversation. A novice types one sentence and wonders why the output is mediocre.
Before you ask any substantive question in any model, paste this block and fill it in:
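A minimal version looks like this. The five fields below are one workable set, not an official formula; adapt the labels to your own work:

```text
ROLE: I am a [your role] at a [type and size of organisation].
GOAL: I need [specific outcome] for [purpose or deadline].
AUDIENCE: The output is for [who will read or use it].
CONSTRAINTS: [tone, length, budget, things to avoid].
FORMAT: Respond as [memo / bullet points / table], roughly [length].
```

The exact wording matters less than the habit. Once the model knows who is asking and why, every answer in the conversation improves.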
The most widely used model. Also the most misused. Here is what the documentation does not tell you.
ChatGPT uses Reinforcement Learning from Human Feedback, which means it was trained to produce outputs that human raters preferred. This creates a specific pattern. It is wired to produce responses that feel satisfying and complete even when the honest answer is uncertainty. This is why it sometimes sounds more confident than it should be.
ChatGPT has a strong bias toward producing a full answer even when it should say it does not know. Counter this explicitly: tell it "If you are uncertain about any part of this, say so explicitly rather than filling the gap with an assumption."
In ChatGPT's Settings menu there is a section called Custom Instructions. Two boxes: one for who you are, one for how you want it to respond. These instructions run as a silent system prompt on every single conversation. Most users have never opened this menu. The people who fill it in get dramatically better outputs every time because the model already knows who they are before they say a word.
After any analysis, add: "Now argue the opposite. What is the strongest case against everything you just said?" This forces ChatGPT out of its confirmation bias and gives you a complete picture of any situation.
Claude thinks differently from other models. Understanding how it reasons is the key to unlocking its full capability.
Claude is built by Anthropic with a strong emphasis on reasoning quality and intellectual honesty. Unlike models optimised primarily for user satisfaction, Claude is trained to express genuine uncertainty, push back on incorrect premises, and think through problems rather than pattern-match to a likely-sounding answer. This makes it the most reliable model for tasks where accuracy matters more than speed.
Claude has a mode called extended thinking where it reasons through a problem in a hidden scratchpad before responding. The response you see is the output of that reasoning process, not a first-pass pattern match. When the toggle is on, Claude visibly shows its reasoning in a collapsible section before the answer. Two things make extended thinking work better. First, the prompt needs to be genuinely complex. Simple questions get a short think regardless. Second, including explicit reasoning constraints in your prompt makes the thinking deeper.
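Reasoning constraints can be as simple as a few appended instructions. The wording below is illustrative, not a required syntax:

```text
Before answering:
1. List every assumption you are making and flag the shaky ones.
2. Consider at least two competing interpretations of the question.
3. Name the single piece of evidence that would change your conclusion.
Then give your answer.
```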
Claude Projects is the most underused feature in the entire AI landscape. Create a project, upload documents, write a system prompt, and every conversation in that project starts with full context. Claude knows your business, preferences, constraints, and goals before you say a word. For founders, consultants, and professionals with complex ongoing work, this feature alone is worth the Claude Pro subscription.
Your role and background. Your company and what it does. Your primary goals right now. Your communication preferences. What you want Claude to always do and never do. Key facts about your situation that would take 10 minutes to re-explain every session. Update this monthly. It is your permanent AI memory.
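A sketch of what that project system prompt might look like, with placeholder values to replace and categories you should adjust to your own situation:

```text
WHO I AM: [name], [role] at [company], which [one-line description].
CURRENT GOALS: [goal 1]; [goal 2]; [goal 3].
COMMUNICATION: [direct or diplomatic], [concise or detailed], [tone].
ALWAYS: challenge weak assumptions; state uncertainty explicitly.
NEVER: pad answers with generic advice; invent figures.
KEY FACTS: [clients, constraints, and deadlines that recur every session].
```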
Gemini is not trying to be ChatGPT. It is built for a different job. Once you understand that job, it becomes remarkably powerful.
Gemini is Google's model and it is architected around Google's core advantage which is access to real-time information and deep integration with the Google product ecosystem. Where ChatGPT and Claude work primarily from their training data, Gemini can access current web information and connect to your Google Workspace in ways the other models cannot.
Gemini 1.5 Pro can process an entire PDF, video, audio file, or image alongside your text prompt in a single request. Upload a 50-slide competitor deck and ask "What is their positioning strategy and where are the gaps?" Or upload a meeting recording and ask "What were the three most important decisions made and who is responsible for each?" No other model handles this combination as well.
Gemini has a feature called Google Search grounding which connects its responses to live search results. When this is on, Gemini's answers are anchored to current information rather than training data. This is critical for market research, competitive analysis, recent news, or anything time-sensitive. Without grounding, Gemini behaves like any other model with a knowledge cutoff. With grounding, it behaves like a research analyst with live internet access.
The only model in this guide built primarily around search. Most people use it like a search engine. That leaves most of its capability on the table.
Perplexity sits at the intersection of a search engine and a language model. Every answer it produces is grounded in live web sources which it cites inline. This means you can verify every claim it makes in real time. For professional work where accuracy and attribution matter, this is a critical advantage.
In Pro search mode, Perplexity asks clarifying questions before answering, runs multiple searches across different angles, and synthesises a more comprehensive answer. Always use Pro search for anything that matters. The Focus modes restrict search to specific sources:

- Academic searches Semantic Scholar and academic databases.
- Reddit searches Reddit discussions, which is surprisingly useful for understanding what real users actually think about products rather than what companies say about them.
- News restricts results to recent sources only.
Search the same topic three times in Perplexity with three different Focus modes. Academic mode gives you what researchers say. Reddit mode gives you what practitioners and users say. News mode gives you what is happening right now. The gaps and contradictions between these three answers are often where the most interesting insights live.
Every AI company offers multiple model versions. Knowing which one to use for which task is as important as knowing how to prompt.
When you open ChatGPT or Claude, you are not just choosing a product. You are choosing from a family of models, each built for a different speed and intelligence tradeoff. Picking the wrong model is like bringing a sledgehammer to a job that needs a scalpel. Understanding this changes how you work.
Start with the standard model for each tool. Switch to the more powerful version when you hit a wall. For Claude this means starting with Sonnet and switching to Opus when the reasoning needs to go deeper. For ChatGPT it means using GPT-4o for most things and switching to o3 when you need genuine step-by-step reasoning. For Gemini it means switching to Pro when the document is large or multimodal. Knowing when to upgrade your model is a skill in itself.
This is the chapter that could save your professional reputation.
Every time you paste text into a free AI tool, that text is potentially stored, reviewed, and in some configurations used to train future versions of the model. For professional tasks involving client information, financial data, strategic plans, or personnel matters, this is a serious risk that most organisations have not adequately addressed.
The most practical way to use AI safely with sensitive material is to strip all identifying information before pasting and replace it with generic placeholders. The AI does not need to know the client's name to help you structure a proposal.
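One way to make the placeholder step repeatable is a small script that swaps known sensitive strings for generic tokens before you paste, and swaps them back in the AI's response afterwards. This is an illustrative sketch under simple assumptions, not a vetted redaction tool; the names and amounts in the mapping are invented for the example, and a real workflow would also catch variants and spelling differences.

```python
# Swap sensitive strings for placeholders before pasting text into an
# AI tool, and restore them in the response afterwards.
# The mapping below is example data; build yours from your own records.

SENSITIVE = {
    "Acme Holdings": "[CLIENT_A]",
    "Jane Doe": "[PERSON_1]",
    "4.2 million": "[AMOUNT_1]",
}

def anonymise(text: str) -> str:
    """Replace each sensitive string with its generic placeholder."""
    for real, placeholder in SENSITIVE.items():
        text = text.replace(real, placeholder)
    return text

def restore(text: str) -> str:
    """Swap placeholders back once the AI response comes in."""
    for real, placeholder in SENSITIVE.items():
        text = text.replace(placeholder, real)
    return text

draft = "Proposal for Acme Holdings: Jane Doe approved 4.2 million."
safe = anonymise(draft)
# safe == "Proposal for [CLIENT_A]: [PERSON_1] approved [AMOUNT_1]."
```

The AI sees only the placeholders, which is all it needs to help you structure the document; the real names never leave your machine.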
| Platform | Where to Go | Toggle Off |
|---|---|---|
| ChatGPT | Settings → Data Controls | "Improve the model for everyone" |
| Claude | Settings → Privacy | Data usage for training toggle |
| Gemini | Google Account → Data and Privacy → Web and App Activity | Gemini Apps Activity |
| Perplexity | Settings → Account | Data collection preferences |
ChatGPT Enterprise, Claude for Enterprise, and Gemini Workspace for Business all contractually guarantee that your data is not used for training. Zero exceptions. This is the single most important upgrade for any organisation handling client or financial data in AI tools. When you pitch AI upskilling to corporate clients, this is the data privacy answer they need before they say yes.
Copy-paste prompt frameworks for 12 professional scenarios. Fill in the brackets. Press enter.
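As a taste of the format, here is what one such framework looks like. This decision-memo example is illustrative of the bracket-filling pattern, not a verbatim entry from the twelve:

```text
You are a [role, e.g. senior strategy consultant]. I need a decision
memo on [decision]. Context: [two or three sentences of background].
Options on the table: [option A], [option B], [option C].
Evaluate each against [criteria]. Recommend one option, state the
strongest argument against your recommendation, and list what new
information would change your answer. Format: a one-page memo.
```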
"The AI is the same for everyone. The setup is what separates the results."
— Prashant Shinde
This guide is part of the Careers Skills AI Agility programme, which takes organisations from AI-aware to AI-capable to AI-led. If you want to bring this capability to your enterprise, reach out directly.
Partner With Us