🗂️ Context Manipulation
Achieving infinite memory in finite context windows
The Problem
AI has context limits:
- Claude: ~200K tokens
- ChatGPT: ~128K tokens
- Others: even less
Large projects blow through this instantly. A medium codebase? 500K+ tokens easy.
You hit the wall. AI forgets. Progress stops.
The Solution
Context manipulation: achieving 95-97% effective compression without losing critical information.
Not by upgrading models. By working smarter with context.
Core Techniques
1. Hierarchical Context Compression
Layer your context like an onion:

```
LAYER_0: Meta-context (what are we doing overall?)
LAYER_1: Structure (how is this organized?)
LAYER_2: Active work (what are we doing RIGHT NOW?)
LAYER_3: Details (specifics for current task)
```

Example:
```
@ai CONTEXT_LAYERS:

META: Refactoring authentication system
STRUCTURE: src/auth/ (5 files), tests/auth/ (3 files)
ACTIVE: Working on src/auth/login.ts
DETAIL: Lines 42-67, fixing token expiry check

Current issue: [specific problem]
```

Result:
- Traditional: Paste entire 500-line file (3,000+ tokens)
- Hierarchical: Reference + specific section (150 tokens)
- Compression: 95%
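The layering above can be sketched as a small helper — a hypothetical Python function (`layered_context` is an invented name, not part of any tool) that assembles the four layers into a compact header instead of pasting whole files:

```python
def layered_context(meta, structure, active, detail):
    """Build a compact LAYER_0..LAYER_3 header instead of pasting whole files."""
    return "\n".join([
        f"META: {meta}",
        f"STRUCTURE: {structure}",
        f"ACTIVE: {active}",
        f"DETAIL: {detail}",
    ])

header = layered_context(
    meta="Refactoring authentication system",
    structure="src/auth/ (5 files), tests/auth/ (3 files)",
    active="Working on src/auth/login.ts",
    detail="Lines 42-67, fixing token expiry check",
)
```

Paste the resulting header at the top of the prompt; everything below it inherits that frame.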
2. Context Anchoring
Define once, reference infinitely.
```
@ai ANCHOR:AUTH_SYSTEM

System: JWT-based authentication
Stack: Node.js + Express + PostgreSQL
Files:
- src/auth/login.ts (main logic)
- src/auth/tokens.ts (JWT generation)
- src/auth/middleware.ts (route protection)
Pattern: All errors throw AuthError class
Tests: tests/auth/*.test.ts
```

```
// Now reference it:
@ai Using ANCHOR:AUTH_SYSTEM
Fix the token refresh race condition
```

Result:
- Define once: 200 tokens
- Reference forever: 10 tokens each time
- Savings: Massive over repeated conversations
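A minimal sketch of the define-once/reference-forever idea, assuming a simple in-memory registry (`ANCHORS`, `define_anchor`, and `reference` are illustrative names, not part of any tool):

```python
ANCHORS = {}

def define_anchor(name, body):
    """Register an anchor; its full token cost is paid once, here."""
    ANCHORS[name] = body
    return f"ANCHOR:{name}\n{body}"

def reference(name):
    """Every later prompt costs only a few tokens."""
    if name not in ANCHORS:
        raise KeyError(f"undefined anchor: {name}")
    return f"Using ANCHOR:{name}"

definition = define_anchor(
    "AUTH_SYSTEM",
    "JWT auth; Node.js + Express + PostgreSQL; errors throw AuthError",
)
ref = reference("AUTH_SYSTEM")
```

The point of the sketch: `ref` is a fraction of the size of `definition`, and you send it as many times as you like.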
3. Diff-Based Context
Send changes, not full files.
```
@ai FILE: src/auth/login.ts

CHANGED_LINES:
42: - if (token.expires < Date.now()) {
42: + if (token.expires <= Date.now()) {
58: + // Add null check
59: + if (!user) return null;

Context: Fixing edge case in token expiry
```

Result:
- Full file: 3,000 tokens
- Diff only: 120 tokens
- Compression: 96%
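If you generate these diffs programmatically, Python's standard `difflib` produces the minimal patch; a small sketch (the file contents here are stand-ins for the real before/after versions):

```python
import difflib

def minimal_diff(old_lines, new_lines, path):
    """Build a unified diff so the prompt carries only the changed lines."""
    return "".join(difflib.unified_diff(
        old_lines, new_lines, fromfile=path, tofile=path,
    ))

old = ["if (token.expires < Date.now()) {\n"]
new = ["if (token.expires <= Date.now()) {\n"]
patch = minimal_diff(old, new, "src/auth/login.ts")
```

Paste `patch` instead of the file; the model sees exactly what changed and nothing else.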
4. Pointer-Based Context
Use file paths instead of content.
```
@ai ISSUE: Authentication failing in production

Relevant files:
- src/auth/login.ts:42-67 (token validation)
- src/auth/tokens.ts:23 (token generation)
- config/jwt.ts:8 (secret key config)

Error occurs at login.ts:58
Related change in commit abc123

Investigate and fix.
```

Result:
- Pasting all files: 5,000+ tokens
- Pointers: 100 tokens
- Compression: 98%
AI can ask for specific files if needed. Start minimal.
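On the tooling side, resolving a `path:start-end` pointer to just the referenced lines is straightforward; a hypothetical sketch, with a dict standing in for the real filesystem:

```python
def read_pointer(pointer, files):
    """Resolve 'path:start-end' (or 'path:line') to only the referenced lines.

    `files` is a dict standing in for the real filesystem in this sketch.
    """
    path, _, span = pointer.rpartition(":")
    start, _, end = span.partition("-")
    lines = files[path].splitlines()
    lo, hi = int(start), int(end or start)
    return "\n".join(lines[lo - 1:hi])

files = {"src/auth/login.ts": "line1\nline2\nline3\nline4"}
snippet = read_pointer("src/auth/login.ts:2-3", files)
```

Only the requested lines ever enter the conversation.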
5. Session State Management
Explicitly track conversation state.
```
@ai SESSION_STATE:

TASK: Refactor authentication
PROGRESS: 60% complete
COMPLETED:
✅ Login endpoint refactored
✅ Token generation updated
✅ Tests passing
IN_PROGRESS:
🔄 Refresh token logic
BLOCKED:
⚠️ Waiting on database migration
NEXT:
📋 Update middleware
📋 Add rate limiting

Current focus: Refresh token implementation
```

Result:
- AI knows exactly where we are
- No need to repeat completed work
- Clear what’s next
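One way to keep the state block honest is to generate it from a structure you update as you work — a sketch with invented field and class names:

```python
from dataclasses import dataclass, field

@dataclass
class SessionState:
    task: str
    progress: int  # percent complete
    completed: list = field(default_factory=list)
    in_progress: list = field(default_factory=list)
    next_up: list = field(default_factory=list)

    def render(self) -> str:
        """Emit a compact SESSION_STATE block for the next prompt."""
        return "\n".join([
            f"TASK: {self.task} ({self.progress}% complete)",
            "COMPLETED: " + "; ".join(self.completed),
            "IN_PROGRESS: " + "; ".join(self.in_progress),
            "NEXT: " + "; ".join(self.next_up),
        ])

state = SessionState(
    "Refactor authentication", 60,
    completed=["Login endpoint refactored", "Tests passing"],
    in_progress=["Refresh token logic"],
    next_up=["Update middleware", "Add rate limiting"],
)
```

Update the object, re-render, paste. The block never drifts from reality.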
6. Semantic Compression Codes
Create shorthand for complex patterns.
```
@ai CODES:

REFACTOR_PATTERN_1: Extract to function, add types, write test
ERROR_PATTERN: Try-catch, log to service, return Result type
API_PATTERN: Validate input, check auth, call service, return JSON
```

```
// Usage:
@ai Apply REFACTOR_PATTERN_1 to calculateTotal() function
@ai Add ERROR_PATTERN to all database queries
```

Result:
- Define patterns once
- Reference in 2-3 tokens
- AI knows exactly what you mean
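Expanding the codes before sending (rather than trusting the model to remember the table) can be automated; a minimal sketch, assuming plain string substitution is enough:

```python
# Code table mirroring the prompt above; names match the article's examples.
CODES = {
    "REFACTOR_PATTERN_1": "Extract to function, add types, write test",
    "ERROR_PATTERN": "Try-catch, log to service, return Result type",
    "API_PATTERN": "Validate input, check auth, call service, return JSON",
}

def expand_codes(prompt):
    """Inline each code's definition so the model sees the full meaning."""
    for code, meaning in CODES.items():
        prompt = prompt.replace(code, f"{code} ({meaning})")
    return prompt

expanded = expand_codes("Apply REFACTOR_PATTERN_1 to calculateTotal()")
```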
7. Conditional Context Loading
Load context only when needed.
```
@ai I'm working on authentication

// AI asks: "Do you need context for:"
// - JWT implementation?
// - Database schema?
// - API endpoints?
// - Test patterns?

// You answer:
Just JWT implementation and tests

// AI loads ONLY those contexts
```

Result:
- Don’t load what you don’t need
- AI asks for more if required
- Stay under limits
8. Context Stack (Push/Pop)
Manage context like a stack.
```
@ai PUSH_CONTEXT: Refactoring login.ts

[Work on login.ts]

@ai POP_CONTEXT
@ai PUSH_CONTEXT: Updating tests

[Work on tests]

@ai POP_CONTEXT
Back to overall refactoring
```

Result:
- Deep work without losing place
- Return to previous state cleanly
- Organized context management
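The push/pop discipline maps directly onto a stack data structure; a small illustrative implementation (class and method names are invented for this sketch):

```python
class ContextStack:
    """PUSH/POP contexts so a deep dive never loses the surrounding task."""

    def __init__(self, root):
        self._stack = [root]

    def push(self, context):
        self._stack.append(context)

    def pop(self):
        if len(self._stack) == 1:
            raise IndexError("cannot pop the root context")
        return self._stack.pop()

    @property
    def current(self):
        return self._stack[-1]

stack = ContextStack("Overall refactoring")
stack.push("Refactoring login.ts")  # deep dive
stack.pop()                         # cleanly back to where we were
```

`current` is what you restate to the AI at each step; the stack remembers the rest.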
9. External Memory Files
Store context outside the conversation.
Create a context.md file:
```
# Project Context

## Architecture
- Microservices: auth, users, billing
- Tech: Node.js + PostgreSQL + Redis
- Deploy: Kubernetes on AWS

## Current Work
Working on: Authentication refactor
Branch: feature/auth-v2
Status: 60% complete

## Key Files
- src/auth/login.ts - Main login logic
- src/auth/tokens.ts - JWT handling
- tests/auth/ - Test suite
```

Usage:

```
@ai Read context.md, then help me fix the token refresh bug
```

Result:
- Context persists across sessions
- Update once, use everywhere
- AI reads from file, not conversation
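Loading the memory file into a prompt is a one-liner; a sketch using a temporary file to stand in for your project's `context.md`:

```python
import pathlib
import tempfile

def load_memory(path):
    """Prepend an external memory file instead of restating context in chat."""
    return pathlib.Path(path).read_text(encoding="utf-8")

# Demo: a temporary file stands in for the project's context.md.
with tempfile.NamedTemporaryFile("w", suffix=".md", delete=False) as f:
    f.write("# Project Context\nBranch: feature/auth-v2\n")
    memory_path = f.name

prompt = load_memory(memory_path) + "\nHelp me fix the token refresh bug"
```

Update the file once; every future session starts from the same ground truth.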
10. Lossy Compression
Keep the essence, regenerate the details.
```
@ai COMPRESSED_CONTEXT:

Auth system broken in production
- Error: "Token invalid"
- Happens: 5% of logins
- Started: After deploy xyz
- Suspect: Token expiry logic

Fix it.

// AI can ask for:
// - Full error logs?
// - Relevant code?
// - Recent commits?

// You provide only what AI requests
```

Result:
- Start with minimum
- Add details on demand
- Never over-context
Real-World Example
Scenario: Debugging a complex issue in a 100K+ line codebase
Traditional Approach:

```
@ai Here's the entire codebase [paste]

ERROR: Context limit exceeded
```

Context Manipulation Approach:

```
@ai ANCHOR:PROJECT_CONTEXT
- Codebase: E-commerce platform
- Stack: React + Node.js + PostgreSQL
- Scale: 100K LOC, 500+ files
- Issue: Checkout failing for 5% of users

ACTIVE_INVESTIGATION:
- File: src/checkout/payment.ts:145
- Error: "Cannot read property 'id' of undefined"
- Started: 2 days ago after deploy
- Affected: Users with saved cards

RELEVANT_FILES (pointer-based):
- src/checkout/payment.ts:140-160
- src/models/PaymentMethod.ts:23-45
- src/services/stripe.ts:67

HYPOTHESIS:
Saved cards missing 'id' field in some edge case

DIFF from last working version:
payment.ts:145
- const cardId = card.id
+ const cardId = card?.stripeId || card.id

Help me debug and fix.
```

Token usage:
- Traditional: CRASH (over limit)
- Optimized: ~300 tokens
- Effective compression: 99%+
The Workflow
- Define anchors for major systems/contexts
- Use pointers instead of pasting code
- Send diffs for changes
- Layer context (meta → structure → active → detail)
- Track session state explicitly
- Load conditionally - only what’s needed
- Store externally - context.md files
- Compress lossily - essence first, details on demand
Best Practices
✅ DO:
- Start with minimum context
- Add detail only when AI asks
- Use file:line references
- Create anchors for recurring contexts
- Update context.md files
- Think in layers
❌ DON’T:
- Paste entire files unless required
- Repeat context AI already has
- Front-load every possible detail
- Ignore context structure
Measuring Success
Before context manipulation:
- Hit limits constantly
- Restart conversations frequently
- Lose progress
- Frustrated
After context manipulation:
- Rarely hit limits
- Conversations last indefinitely
- Continuous progress
- Productive
Compression ratios:
- Good: 80-90%
- Great: 90-95%
- Legendary: 95-99%
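These ratios are just tokens saved relative to the naive full-context prompt; a one-line helper makes the arithmetic explicit (the token counts below are the article's own example figures):

```python
def compression_ratio(full_tokens, compressed_tokens):
    """Percent of tokens saved versus the naive full-context prompt."""
    return round(100 * (1 - compressed_tokens / full_tokens), 1)

# The hierarchical-compression example: 3,000 tokens down to 150.
saving = compression_ratio(3000, 150)
```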
Tools That Help
Manual:
- context.md files in your project
- Anchor definitions at conversation start
- Explicit state tracking
Future (coming soon):
- IDE extensions for auto-compression
- Context managers
- Automatic anchor generation
Next Level
Combine with:
- Emoji Protocol - Compress even further
- Meta-Prompting - Generate optimal context structures
- Quantum Prompting - Context in superposition
Practice Exercise
Try this:
- Take a complex project you’re working on
- Create a context.md with project overview
- Define 3 ANCHOR points for major systems
- Next AI conversation: Use anchors + pointers only
- Measure token savings
Goal: 90%+ compression on first try
“The best context is the minimum context that gets maximum results.”
Master context manipulation. Work in codebases 10x larger than your context window. 🗂️