Startup Spells 🪄
AI Prompt Compression Slashes LLM Costs by 50%
PLUS: QWQ could be the BEST local AI model
The costs of running large language models add up quickly, especially as AI agents become more common.
But you can use prompt compression to cut these costs in half while maintaining full functionality.
Efficient Tokens Preserve Context While Eliminating Bloat
Prompt compression removes redundant words while keeping essential meaning intact. This technique identifies and strips away unnecessary tokens from original prompts - effectively "zipping" down input text.
The process resembles how we use Google search. Just as we've learned to optimize our Google searches, we must learn to optimize our AI prompts.
Consider these alternatives:
Verbose: "What is the Explain like I'm 5 explanation of why the sky is blue?"
Compressed: "ELI5 sky is blue"
Expert Googlers use the compressed version; beginners use the verbose one. Both achieve identical results, but the compressed version costs a fraction as much to process.
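The savings scale roughly linearly with token count. Here's a minimal sketch of that arithmetic, using whitespace word counts as a crude stand-in for real tokenization and a made-up per-token rate (both are placeholders, not real prices):

```python
# Rough illustration: fewer tokens -> proportionally lower cost.
# Word counts approximate tokens (real tokenizers differ), and the
# rate below is a hypothetical placeholder, not a real price list.

PRICE_PER_1K_TOKENS = 0.01  # hypothetical USD rate

def approx_tokens(prompt: str) -> int:
    """Crude proxy: whitespace-split word count."""
    return len(prompt.split())

def prompt_cost(prompt: str) -> float:
    return approx_tokens(prompt) / 1000 * PRICE_PER_1K_TOKENS

verbose = "What is the Explain like I'm 5 explanation of why the sky is blue?"
compressed = "ELI5 sky is blue"

ratio = prompt_cost(compressed) / prompt_cost(verbose)
print(f"{approx_tokens(verbose)} vs {approx_tokens(compressed)} words, cost ratio {ratio:.2f}")
```

Even this toy estimate shows the compressed query costing under a third of the verbose one.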
SEO Notebook Tool Transforms Verbose Text Instantly
The SEO Notebook Prompt Compressor, built on Gradio, demonstrates the power of prompt compression with this example:

Prompt Compressor
Original input (97 tokens):
"John: So, um, I've been thinking about the project, you know, and I believe we need to, uh, make some changes. I mean, we want the project to succeed, right? So, like, I think we should consider maybe revising the timeline. Sarah: I totally agree, John. I mean, we have to be realistic, you know. The timeline is, like, too tight. You know what I mean? We should definitely extend it."

Input Tokenizer
Compressed output (64 tokens):
"John : So, um ' ve been thinking about project, believe we need to, make changes., want project to succeed, right?, like, think should consider maybe revising timeline. Sarah : agree, John., have to be realistic,. timeline is, like, too tight. know what mean? should extend it."
The compression reduced word count from 71 to 45, character count from 385 to 277, and token count from 97 to 64 while preserving the complete meaning.
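The core idea can be sketched with a simple filler-word pass. Note this is only an illustration: the actual tool likely uses a learned compression method (LLMLingua-style token pruning), not a hand-written stopword list, and the `FILLERS` set here is my own made-up example:

```python
import re

# Toy prompt compressor: strip common conversational fillers, then
# collapse leftover whitespace. A real compressor scores each token's
# information content; this regex pass only handles obvious fillers
# (and would wrongly drop meaningful uses of words like "like").

FILLERS = {"um", "uh", "like", "you know", "i mean", "so", "totally",
           "definitely", "maybe"}

def compress(text: str) -> str:
    # Match longer phrases first so "you know" wins over "you".
    alternation = "|".join(re.escape(f) for f in sorted(FILLERS, key=len, reverse=True))
    out = re.sub(rf"\b({alternation})\b", "", text, flags=re.IGNORECASE)
    return re.sub(r"\s+", " ", out).strip()

original = "So, um, I've been thinking about the project, you know."
print(compress(original))
```

Stray punctuation survives this naive pass, which mirrors the slightly ragged output of the real tool above: the LLM still recovers the meaning.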

Output Tokenizer
RooCode System Prompt Compression
The RooCode system prompt compression case study demonstrates the point at larger scale:
Original system prompt: 10,577 tokens
Compressed system prompt: 8,098 tokens

Roocode System Prompt - Prompt Compressor
This reduction creates immediate savings with no loss in functionality. And since the RooCode system prompt is sent with every request, the savings compound.
You can go further and trim the system prompt manually to save even more tokens.
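To see how a per-request saving compounds, here's a back-of-the-envelope calculation using the token counts from the case study above. The price and request volume are assumptions for illustration, not quoted figures:

```python
# How compressing a system prompt compounds across requests.
# 10,577 -> 8,098 tokens is from the RooCode case study; the price
# ($2.00 / 1M input tokens) and request count are assumed placeholders.

PRICE_PER_M_INPUT = 2.00  # hypothetical USD per 1M input tokens

def system_prompt_cost(tokens_per_request: int, requests: int) -> float:
    return tokens_per_request * requests * PRICE_PER_M_INPUT / 1_000_000

requests = 10_000  # e.g. a busy agent issuing many calls
before = system_prompt_cost(10_577, requests)
after = system_prompt_cost(8_098, requests)
print(f"saved ${before - after:.2f} over {requests:,} requests")
```

A ~2,500-token trim looks trivial on one call, but at these assumed rates it saves about $50 per ten thousand requests, before any user-prompt compression.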
RooCode AI Agent: Real-World Coding Example
The most revealing example is a real-world coding task I performed using RooCode, an agentic coding assistant. What appeared to be a simple codebase refactoring turned into a surprisingly expensive operation:
Input tokens: 53.3 million
Output tokens: 266.9k
Total cost: $137.12 for this single task

Roocode Costs
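You can roughly reconstruct that bill from the token counts. The rates below are my assumption of Gemini 2.5 Pro's long-context pricing ($2.50 per 1M input tokens, $15 per 1M output tokens); check the live price list before relying on them:

```python
# Sanity-checking the bill from the reported token counts.
# Rates are assumed long-context Gemini 2.5 Pro prices (illustrative
# only; verify against the current pricing page).

input_tokens = 53_300_000
output_tokens = 266_900

INPUT_PRICE_PER_M = 2.50   # assumed USD / 1M input tokens
OUTPUT_PRICE_PER_M = 15.00  # assumed USD / 1M output tokens

cost = (input_tokens / 1e6 * INPUT_PRICE_PER_M
        + output_tokens / 1e6 * OUTPUT_PRICE_PER_M)
print(f"${cost:.2f}")
```

That lands close to the reported $137.12 (the small gap is consistent with the rounded token counts), and it makes the real lesson obvious: input tokens dominate the bill, which is exactly what compression and task segmentation attack.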
Thankfully, I got $300 of free credits through Vertex AI, but that won't always be the case. AI agents are not pre-determined workflows; they will spend as many tokens as it takes to find an acceptable answer, and a single error can cost you an arm and a leg.

Vertex AI Free Credits
Agentic Coding Costs Compound For Two Specific Reasons
This unexpected cost explosion happened for two distinct reasons:
Thinking Model Premium: Gemini 2.5 Pro charges higher rates for "thinking" tokens, and in an agent loop those reasoning traces get re-sent as input on subsequent turns, so they cost substantially more than standard processing tokens.
Monolithic Task Structure: The entire codebase was processed as one massive task rather than breaking it into smaller components. Each part of the code was sent sequentially within the same context window, forcing the model to process everything together. The solution is to break the task into smaller, discrete subtasks, each with its own context window.
Task Segmentation Creates Dramatic Cost Reduction
Breaking large tasks into smaller context windows cuts costs through better token management. RooCode's Boomerang Tasks feature, for example:
Breaks a parent coding task into smaller, discrete subtasks
Assigns each subtask a smaller context window
Completes each piece independently
Reports results back to the parent task with summaries
This structure prevents token bloat by localizing context to only what each subtask needs, reducing costs by 50% or more compared to processing everything in a single context window.
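The pattern above can be sketched in a few lines. Everything here is illustrative: the function and class names are mine, not RooCode's actual API, and the LLM call is stubbed out:

```python
# Sketch of the "boomerang" pattern: a parent task fans out small
# subtasks, each with its own fresh context, and only short summaries
# flow back. All names are hypothetical, not RooCode's real API.

from dataclasses import dataclass

@dataclass
class Subtask:
    instruction: str
    context: str  # only the files/snippets this piece needs

def run_subtask(task: Subtask) -> str:
    # Placeholder for a real LLM call made with a small context window.
    return f"summary of: {task.instruction}"

def run_parent(goal: str, subtasks: list[Subtask]) -> str:
    # Each subtask starts from a clean context; the parent only ever
    # sees summaries, so its own context window stays small too.
    summaries = [run_subtask(t) for t in subtasks]
    return f"{goal} -> " + "; ".join(summaries)

result = run_parent(
    "refactor auth module",
    [Subtask("rename helpers", "auth/helpers.py"),
     Subtask("update tests", "tests/test_auth.py")],
)
print(result)
```

The key design choice is that raw subtask context never reaches the parent: token usage is bounded per subtask instead of accumulating in one giant window.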
Coding's Deterministic Nature Creates A Model For All AI Applications
The optimization principles proven in coding tasks will transfer to marketing, design, sales and other business functions. Why? Because coding is deterministic - computers execute exactly what they're instructed to do.
This deterministic nature gives coders a significant advantage in building applications across all domains as they adopt agentic AI workflows. The same token optimization strategies will apply regardless of the field.
By applying these compression techniques across all AI interactions, organizations can cut their AI costs in half while maintaining full functionality.
Top Tweets of the day
1/
I KNEW IT!
It takes real energy to move from one “brain state” to another, which explains what I think is one of the most under-appreciated facts in all productivity:
That the mere order in which you complete tasks is profoundly important
A sequence like Task A => Task B =>
— Tiago Forte (@fortelabs)
2:18 AM • Apr 8, 2025
Focus on tasks serially, not in parallel. Your brain doesn't have enough power to do both at once.
Similarly, build 1 business. You can build a singular massive business instead of 10 side-projects.
Compounding only takes effect when you have something to compound.
2/
Today, arcads.ai reached $5M ARR with a team of 5
we’re on track for $100M with fewer than 10
how?
… internal ai agents (and 10x engineers)
here are the TOP 10 ai agents we use EVERYDAY to keep the team tiny:
— Romain Torres (@rom1trs)
12:00 PM • Apr 9, 2025
Arcads AI is an AI ads startup. Every one of these AI-native startups is growing exponentially. This is the era of insane leverage. Exploit it before the world catches up.
3/
It's amazing how effective this one sentence prompt is for stopping sonnet 3.7 from overengineering and refactoring your entire codebase
— Ian Nuttall (@iannuttall)
12:15 PM • Apr 8, 2025
Claude 3.7 Sonnet Thinking is a good model, but it has a habit of changing code that's totally unrelated to the task.
Once again, prompts have alpha. I need to go down a prompt-engineering rabbit hole and read the system prompts of great products like Claude, Aider, Claude Code, Roo Code, and more.
Rabbit Holes
What’d ya think of today’s newsletter? Hit ‘reply’ and let me know.
Do me a favor and share it in your company's Slack #marketing channel.
First time? Subscribe.
Follow me on X.