Prompt Engineering Tips for Claude AI: 5 Expert Strategies from Anthropic

PLUS: Artificial Intelligence Trends (May 2025)

Effective communication is essential for getting the best results from large language models (LLMs) like ChatGPT, Claude, Gemini, Grok, Qwen, and DeepSeek.

Alex Albert, a Prompt Engineer at Anthropic, shared five practical strategies for crafting prompts that help Claude perform at its best.

Prompt engineering involves designing your inputs to elicit the most accurate, useful, and contextually relevant outputs from a model. These techniques focus on precision, clarity, and thoughtful guidance to improve model responses.

Below are 5 core prompting techniques used at Anthropic to improve results with Claude:

Tip 1: Describe the Task with Precision

Claude performs best when instructions are specific, detailed, and unambiguous. Vague prompts leave room for misinterpretation, while clear directives improve output quality.

Example Prompt:

I want you to de-identify some text by removing all personally identifiable information from it so that it can be shared safely with external contractors.

Here is the text you should process:

"Alex is a Prompt Engineer at Anthropic. He can be reached at 925 123-4567 or [email protected]"

How This Helps Claude: By clearly stating the task and the goal (de-identification), Claude can align its output with the intended result. Explicit definitions reduce guesswork, especially around terms like "personally identifiable information."
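If you want to try this programmatically, here is a minimal sketch using Anthropic's Python SDK. The model name below is a placeholder, so substitute whichever Claude model you have access to.

```python
# A minimal sketch of sending the Tip 1 prompt to Claude with Anthropic's
# Python SDK. The model name is a placeholder -- swap in the Claude model
# you actually use.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

prompt = (
    "I want you to de-identify some text by removing all personally "
    "identifiable information from it so that it can be shared safely "
    "with external contractors.\n\n"
    "Here is the text you should process:\n\n"
    '"Alex is a Prompt Engineer at Anthropic. He can be reached at '
    '925 123-4567 or [email protected]"'
)

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model name
    max_tokens=500,
    messages=[{"role": "user", "content": prompt}],
)
print(message.content[0].text)
```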

Tip 2: Use XML Tags to Structure Prompts

Claude has been trained to recognize and respond to XML-style tags. These tags act like signposts, helping the model separate instructions, examples, and inputs more effectively.

Example Prompt:

I want you to de-identify some text by removing all personally identifiable information from it so that it can be shared safely with external contractors.

**PII includes names, phone numbers, home addresses, and email addresses. Please replace all instances of PII with a singular "XXX".**

Here is the text you should process:
<text>
"Alex is a Prompt Engineer at Anthropic. He can be reached at 925 123-4567 or [email protected]"
</text>

How This Helps Claude: Tags like <text> signal which portion of the prompt contains content to be processed. This reduces confusion between the task instructions and the target input, leading to cleaner, more reliable responses. Note that the word "text" in the first sentence points to the <text> tag used later, so the model isn't confused about which content to process.
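One low-effort way to keep instructions and input from blurring together is to wrap the content in its tag in code before sending the prompt. A small sketch, plain Python string handling and nothing Claude-specific:

```python
# Wrap the content to be processed in an XML-style tag so instructions and
# input stay clearly separated in the final prompt.
def wrap_in_tag(tag: str, content: str) -> str:
    """Wrap content in <tag>...</tag> so the model can tell input apart from instructions."""
    return f"<{tag}>\n{content}\n</{tag}>"

instructions = (
    "I want you to de-identify some text by removing all personally "
    "identifiable information from it so that it can be shared safely with "
    "external contractors.\n"
    "PII includes names, phone numbers, home addresses, and email addresses. "
    'Please replace all instances of PII with a singular "XXX".'
)

raw_text = (
    '"Alex is a Prompt Engineer at Anthropic. '
    'He can be reached at 925 123-4567 or [email protected]"'
)

prompt = (
    f"{instructions}\n\n"
    f"Here is the text you should process:\n{wrap_in_tag('text', raw_text)}"
)
print(prompt)
```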

Tip 3: Include Examples to Clarify Expectations

Showing Claude a clear input-output example makes a significant difference—especially for nuanced or stylistic tasks. It’s a way to demonstrate the pattern rather than describe it abstractly.

Example Prompt:

I want you to de-identify some text by removing all personally identifiable information from it so that it can be shared safely with external contractors.
PII includes names, phone numbers, home addresses, and email addresses. Please replace all instances of PII with a singular "XXX".

Here is an example:
<example>
The de-identified version of "**Alex is a cardiologist at Mercy Health Medical Center.** He can be reached at 925-123-4567 or [email protected]." would be "XXXX is a cardiologist at Mercy Health Medical Center. He can be reached at XXX-XXX-XXXX or XXX@XXX."
</example>

Here is the text you should process:
<text>
"Alex is a Prompt Engineer at Anthropic. He can be reached at 925 123-4567 or [email protected]"
</text>

How This Helps Claude: Examples offer a concrete reference for formatting, tone, and transformation logic. This makes it easier for Claude to generalize the rule and follow it across new inputs.
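Here is a sketch of how the same few-shot prompt could be assembled in code. The build_prompt helper is hypothetical, just string templating that keeps the example and the input in their own tags:

```python
# Assemble a few-shot prompt: one worked <example> plus the new <text> to
# process, so the model can copy the transformation pattern.
EXAMPLE = (
    'The de-identified version of "Alex is a cardiologist at Mercy Health '
    'Medical Center. He can be reached at 925-123-4567 or [email protected]." '
    'would be "XXXX is a cardiologist at Mercy Health Medical Center. He can '
    'be reached at XXX-XXX-XXXX or XXX@XXX."'
)

def build_prompt(instructions: str, example: str, text: str) -> str:
    """Combine task instructions, one worked example, and the input to process."""
    return (
        f"{instructions}\n\n"
        f"Here is an example:\n<example>\n{example}\n</example>\n\n"
        f"Here is the text you should process:\n<text>\n{text}\n</text>"
    )

prompt = build_prompt(
    instructions="I want you to de-identify some text ...",  # as in Tip 2
    example=EXAMPLE,
    text=(
        '"Alex is a Prompt Engineer at Anthropic. '
        'He can be reached at 925 123-4567 or [email protected]"'
    ),
)
print(prompt)
```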

Tip 4: Take Advantage of Long Context

Claude supports a large context window—up to 100,000 tokens, or about 70,000 words. That means entire books, research papers, or datasets can be included in the prompt for analysis, summarization, or transformation.

Example Prompt:

Summarize the highlights in this book, then draft me 5 essay questions based on the economic conditions during the time of the story.
(Attached file: Great Gatsby.txt)

How This Helps Claude: Using long-context inputs allows users to supply complete information upfront, which improves Claude’s ability to reference, reason about, and generate grounded outputs. This removes the need to chunk inputs or rely on memory across prompts.
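In practice this usually means reading the full document from disk and dropping it into the prompt. A sketch using Anthropic's Python SDK, assuming the book lives in a local file named "Great Gatsby.txt" and with a placeholder model name; the <book> tag is just Tip 2 applied to the long input:

```python
# A sketch of a long-context prompt: read an entire book from disk and
# include it in a single request. File name and model name are placeholders.
import anthropic

client = anthropic.Anthropic()

with open("Great Gatsby.txt", encoding="utf-8") as f:
    book = f.read()  # the entire book goes into one prompt

prompt = (
    "Summarize the highlights in this book, then draft me 5 essay questions "
    "based on the economic conditions during the time of the story.\n\n"
    f"<book>\n{book}\n</book>"
)

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model name
    max_tokens=2000,
    messages=[{"role": "user", "content": prompt}],
)
print(message.content[0].text)
```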

Tip 5: Encourage Internal Reasoning with “Thinking” Tags

For complex tasks, prompting Claude to "think first" improves the quality of its final answer. This technique, known as chain-of-thought prompting, helps the model walk through its reasoning before producing a result.

Example Prompt:

When you reply, first plan how you should answer within <thinking> </thinking> XML tags. This is a space for you to write down relevant content and will not be shown to the user.
Once you are done thinking, output your final answer to the user within <answer> </answer> XML tags. Make sure your answer is detailed and specific.

Here is the user question:
<question> How many paperclips can I fit in a 1967 Volkswagen Beetle? </question>

Example Response from Claude:

<thinking>
To answer this question, I need to first estimate the interior volume of a 1967 Volkswagen Beetle, and then calculate how many paperclips could fit into that space. Here are the steps I will take:
1. Research the typical interior dimensions of a 1967 VW Beetle to get approximate length, width and height.
2. Use those dimensions to calculate the total interior volume in cubic meters.
3. Look up the typical dimensions of a standard paperclip to estimate its volume.
4. Divide the total interior volume by the volume per paperclip to get an estimate of the maximum number of paperclips that would fit.
I'll make some reasonable assumptions where needed and show my work so the user can verify my logic.
</thinking>

<answer>
Here is how I would estimate the number of paperclips that can fit in a 1967 Volkswagen Beetle:

According to my research, a typical 1967 VW Beetle has an interior length of approximately 4 meters, a width of 1.6 meters, and a height of 1.35 meters. Multiplying those dimensions gives an estimated total interior volume of 8.64 cubic meters.

Assuming standard #1 size paperclips are approximately 1 inch long and 0.1 inches thick, I estimate each paperclip has a volume of approximately 2.5 cubic centimeters or 0.0000025 cubic meters.

Dividing the total interior volume by the volume per paperclip gives:
8.64 cubic meters / 0.0000025 cubic meters per paperclip = 3,456,000 paperclips

Therefore, I estimate that a 1967 Volkswagen Beetle can hold approximately 3.5 million standard #1 size paperclips. Let me know if you need me to show or explain any of the steps in my work!
</answer>

How This Helps Claude: By allowing the model to lay out its thought process step by step, the final response is typically more accurate, transparent, and logically sound. This is particularly useful in scenarios involving estimation, multi-step reasoning, or subjective judgment.
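Since the <thinking> block is meant to stay hidden from the user, you typically strip it out before displaying the response. A small sketch, assuming the reply follows the tag format requested in the prompt above:

```python
# Keep the <thinking> scratchpad for logging, but surface only the <answer>
# section to the end user.
import re

def extract_answer(response_text: str) -> str:
    """Return the content of <answer>...</answer>, falling back to the full text."""
    match = re.search(r"<answer>(.*?)</answer>", response_text, re.DOTALL)
    return match.group(1).strip() if match else response_text.strip()

# Example: extract_answer(message.content[0].text) would return only the
# paperclip estimate, not the planning steps inside <thinking>.
```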

By following these five techniques—clear task descriptions, structured inputs, example-based guidance, full-context usage, and internal reasoning—users can greatly improve the quality and reliability of Claude’s responses.

These tips reflect Anthropic’s own approach to building prompts that are precise, scalable, and aligned with human intent.

Top Tweets of the day

1/

Anthropic cut off Windsurf's API access after Windsurf was rumored to be getting acquired by OpenAI for $3 billion.

It's good that there are plenty of AI providers, so the third-party risk isn't that big, but it's still scary if it happens to you.

2/

People are trolling him, but I love this reframe. You can do a lot more work, a lot faster, than you think.

The rules we know were made before AI and social media existed. The newer rules were never written; we merely adapted the old ones and became way too slow. Speed is the #1 factor in execution. The time from thought to action should be seconds. Mine still takes minutes and hours.

3/

Knowing which model to use for which use case is a skill. The only way to acquire it is to practice by talking with AI models 100x per day.

Rabbit Holes

What’d ya think of today’s newsletter? Hit ‘reply’ and let me know.

Do me a favor and share it in your company's Slack #marketing channel.

First time? Subscribe.

Follow me on X.

More Startup Spells 🪄

  1. How Kat’s Low-Follower TikToks Rack Up a Million Views Weekly Using Aspirational Slideshows (LINK)

  2. Warren Buffett's Ruthless 5-Minute People Filter For Efficient Networking (LINK)

  3. Rezi AI's Growth Strategy: #1 AI Powered Free ATS Resume Builder (LINK)

  4. Peak End Rule (with Real Life Examples) (LINK)
