Initial commit: antigravity-claudekit
This commit is contained in:
136
skills/ck-context-engineering/SKILL.md
Normal file
136
skills/ck-context-engineering/SKILL.md
Normal file
@@ -0,0 +1,136 @@
|
||||
---
|
||||
name: ck-context-engineering
|
||||
description: >
|
||||
Optimizes prompts, context windows, and AI interaction patterns for better LLM outputs.
|
||||
Activate when user says 'improve this prompt', 'optimize context', 'prompt engineering',
|
||||
'getting bad AI responses', 'structure prompt better', or 'reduce token usage'.
|
||||
Accepts existing prompts, system instructions, and desired output descriptions.
|
||||
---
|
||||
|
||||
## Overview
|
||||
Applies prompt engineering and context management techniques to improve LLM response quality, reduce hallucinations, and optimize token efficiency across AI-powered workflows.
|
||||
|
||||
## When to Use
|
||||
- Existing prompts produce inconsistent or low-quality outputs
|
||||
- System prompts need restructuring for a new AI-powered feature
|
||||
- Reducing context window usage while maintaining output quality
|
||||
- Designing multi-turn conversation flows
|
||||
- Creating reusable prompt templates for a team or application
|
||||
|
||||
## Don't Use When
|
||||
- The underlying model needs fine-tuning (prompting won't fix model capability gaps)
|
||||
- Issue is API/infrastructure, not prompt quality
|
||||
- Task needs real-time data the model cannot access (use RAG or tool calls instead)
|
||||
|
||||
## Steps / Instructions
|
||||
|
||||
### 1. Diagnose the Current Prompt
|
||||
|
||||
Identify failure modes:
|
||||
- **Hallucination**: model invents facts → add grounding, cite sources
|
||||
- **Verbosity**: too much irrelevant output → add format constraints
|
||||
- **Missed requirements**: model ignores instructions → restructure, use numbered lists
|
||||
- **Inconsistency**: different results each run → lower temperature, add examples
|
||||
- **Context overflow**: long conversation loses earlier instructions → summarize / compress
|
||||
|
||||
### 2. Apply Prompt Structure Principles
|
||||
|
||||
**Clear role definition:**
|
||||
```
|
||||
You are a senior TypeScript developer specializing in Next.js.
|
||||
Your responses are concise, production-ready, and include error handling.
|
||||
```
|
||||
|
||||
**Explicit output format:**
|
||||
```
|
||||
Respond ONLY with a JSON object matching this schema:
|
||||
{
|
||||
"summary": string, // 1-2 sentences
|
||||
"steps": string[], // ordered action items
|
||||
"confidence": "high" | "medium" | "low"
|
||||
}
|
||||
Do not include explanatory prose outside the JSON.
|
||||
```
|
||||
|
||||
**Constraints before the ask:**
|
||||
```
|
||||
Rules:
|
||||
- Do not use any third-party libraries
|
||||
- Prefer async/await over callbacks
|
||||
- Keep functions under 20 lines
|
||||
|
||||
Task: Write a function that...
|
||||
```
|
||||
|
||||
### 3. Few-Shot Examples
|
||||
|
||||
Add 2–3 examples for tasks with specific output patterns:
|
||||
```
|
||||
Input: "user clicked buy"
|
||||
Output: { "event": "purchase_initiated", "category": "commerce" }
|
||||
|
||||
Input: "page loaded"
|
||||
Output: { "event": "page_view", "category": "navigation" }
|
||||
|
||||
Input: "{{USER_INPUT}}"
|
||||
Output:
|
||||
```
|
||||
|
||||
### 4. Context Window Management
|
||||
|
||||
**Reduce noise:**
|
||||
- Remove redundant instructions
|
||||
- Trim verbose conversation history
|
||||
- Summarize long documents before including
|
||||
|
||||
**Chunking strategy for long content:**
|
||||
```
|
||||
For files > 4000 tokens:
|
||||
1. Extract only relevant sections
|
||||
2. Add chunk position metadata: "[Chunk 2/5 of file.ts, lines 120-240]"
|
||||
3. Request only what the model needs for the current step
|
||||
```
|
||||
|
||||
**System vs user message split:**
|
||||
- System: persistent role, rules, output format
|
||||
- User: task-specific content, inputs, context
|
||||
- Keep system prompt stable; vary user message per request
|
||||
|
||||
### 5. Chain-of-Thought for Complex Reasoning
|
||||
|
||||
```
|
||||
Before answering, think step by step:
|
||||
1. Identify what is being asked
|
||||
2. List any constraints or edge cases
|
||||
3. Draft your approach
|
||||
4. Verify your answer against the constraints
|
||||
Then provide your final response.
|
||||
```
|
||||
|
||||
For simpler tasks, suppress COT to save tokens:
|
||||
```
|
||||
Answer directly without showing your reasoning process.
|
||||
```
|
||||
|
||||
### 6. Temperature and Sampling
|
||||
|
||||
| Task Type | Temperature | Notes |
|
||||
|-----------|-------------|-------|
|
||||
| Code generation | 0.0–0.2 | Deterministic, fewer errors |
|
||||
| Classification | 0.0 | Consistent labels |
|
||||
| Creative writing | 0.7–1.0 | More varied output |
|
||||
| Summarization | 0.3–0.5 | Balanced accuracy/fluency |
|
||||
|
||||
### 7. Evaluate and Iterate
|
||||
|
||||
Create an evaluation set:
|
||||
1. 10–20 representative inputs with expected outputs
|
||||
2. Run prompt variant against all inputs
|
||||
3. Score outputs on accuracy, format compliance, conciseness
|
||||
4. A/B test two prompt versions before deploying
|
||||
|
||||
## Notes
|
||||
- Prompt changes can have non-obvious side effects; always test on a representative sample
|
||||
- Document prompt versions with rationale (treat prompts like code)
|
||||
- Use delimiters (`"""`, `---`, `<tags>`) to separate instructions from user content
|
||||
- Model context limits change — design prompts that degrade gracefully when truncated
|
||||
Reference in New Issue
Block a user