Engineers Playbook

This playbook helps you maximize your productivity when using AI coding assistants like Claude Code, monitor your personal performance, and optimize your tool usage.

Quick Start for Engineers

1. Install A24Z

   npm install -g a24z
   a24z login
   a24z install claude-code

2. Use Your AI Tools Normally

   Work on your tasks as usual - A24Z captures everything automatically.

3. Review Your Metrics

   Check app.a24z.ai to see your performance.

4. Optimize

   Use insights to improve your prompts and workflows.

Key Metrics to Track

1. Tool Success Rate

What it is: Percentage of tool executions that complete successfully without errors.
Why it matters: Low success rates indicate you might be using tools incorrectly or encountering bugs.
How to improve:
  • Review failed tool executions to understand why they failed
  • Refine your prompts to be more specific
  • Check if you’re using the right tool for the task
Target: Aim for >90% success rate
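
For reference, the math behind this number is just successful executions over total executions. A minimal sketch, assuming a hypothetical list of tool-execution events (this is not the A24Z API, only the arithmetic):

```typescript
// Hypothetical event shape for illustration; the real A24Z export format may differ.
interface ToolExecutionEvent {
  tool: string;
  status: "success" | "error";
}

// Success rate = successful executions / total executions, as a percentage.
function successRate(events: ToolExecutionEvent[]): number {
  if (events.length === 0) return 0;
  const succeeded = events.filter((e) => e.status === "success").length;
  return (succeeded / events.length) * 100;
}

const events: ToolExecutionEvent[] = [
  { tool: "write_file", status: "success" },
  { tool: "bash", status: "error" },
  { tool: "view", status: "success" },
];
console.log(`${successRate(events).toFixed(1)}%`); // 66.7%
```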

2. Average Execution Time

What it is: How long tools take to execute on average.
Why it matters: Slow execution times can bottleneck your workflow.
How to improve:
  • Break large tasks into smaller, focused requests
  • Use more specific prompts that require less context
  • Avoid redundant tool calls
Target: Under 5 seconds for most tools

3. Token Usage

What it is: Number of input and output tokens consumed per session.
Why it matters: Higher token usage = higher costs and potentially slower responses.
How to improve:
  • Be concise in your prompts
  • Remove unnecessary context
  • Use file references instead of pasting large code blocks
  • Clear conversation history when starting new tasks
Target: Minimize while maintaining quality
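
If you want to sanity-check the dashboard figure yourself, tokens per session is a simple average of input plus output tokens. A small sketch with made-up session numbers:

```typescript
// Hypothetical per-session token counts; real numbers come from your A24Z dashboard.
interface SessionTokens {
  inputTokens: number;
  outputTokens: number;
}

// Average total tokens consumed per session.
function avgTokensPerSession(sessions: SessionTokens[]): number {
  if (sessions.length === 0) return 0;
  const total = sessions.reduce(
    (sum, s) => sum + s.inputTokens + s.outputTokens,
    0
  );
  return total / sessions.length;
}

const sessions: SessionTokens[] = [
  { inputTokens: 12_000, outputTokens: 3_000 },
  { inputTokens: 8_000, outputTokens: 2_000 },
];
console.log(avgTokensPerSession(sessions)); // 12500
```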

4. Most Used Tools

What it is: Which tools you use most frequently.
Why it matters: Understanding your tool usage helps you learn shortcuts and optimize workflows.
Action items:
  • Learn keyboard shortcuts for your most-used tools
  • Create templates for repetitive tasks
  • Identify if you’re overusing certain tools

5. Session Duration

What it is: How long your AI coding sessions last.
Why it matters: Long sessions might indicate context overload or complexity issues.
How to improve:
  • Break complex tasks into smaller sessions
  • Start fresh sessions for new features/bugs
  • Use session summaries to maintain context
Target: 15-30 minutes per focused session

Common Goals and How to Achieve Them

Goal: Improve Code Quality

Monitor which tools succeed vs. fail when writing code.
  • Dashboard filter: EventType = tool_execution
  • What to look for: Low success rates on code generation tools
  • Action: Refine prompts to be more specific about requirements

Identify common errors in generated code.
  • Dashboard view: Failed tool executions with error logs
  • What to look for: Repeated syntax errors, import issues, type errors
  • Action: Provide better context about your codebase structure

Look at successful vs. unsuccessful coding sessions.
  • Metric: Session success rate and commit frequency
  • What to look for: What makes successful sessions different
  • Action: Replicate successful patterns
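
A quick way to surface the repeated error patterns mentioned above is to group failed executions by their error message. A minimal sketch, assuming a hypothetical export of failed events (the field names are illustrative, not the A24Z schema):

```typescript
// Hypothetical failed-execution record; substitute whatever your export provides.
interface FailedExecution {
  tool: string;
  errorMessage: string;
}

// Count how often each error message appears, most frequent first.
function topErrors(failures: FailedExecution[]): [string, number][] {
  const counts = new Map<string, number>();
  for (const f of failures) {
    counts.set(f.errorMessage, (counts.get(f.errorMessage) ?? 0) + 1);
  }
  return [...counts.entries()].sort((a, b) => b[1] - a[1]);
}

const failures: FailedExecution[] = [
  { tool: "write_file", errorMessage: "Missing import" },
  { tool: "write_file", errorMessage: "Missing import" },
  { tool: "bash", errorMessage: "Command not found" },
];
console.log(topErrors(failures));
// [["Missing import", 2], ["Command not found", 1]]
```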

Goal: Increase Productivity

Current metric: Average tool execution time
Optimization:
  • Use focused, specific prompts
  • Provide only necessary context
  • Break large requests into smaller chunks
Measure improvement: Track week-over-week execution time reduction

Current metric: Tokens per session
Optimization:
  • Remove boilerplate from prompts
  • Use references instead of copying code
  • Clear context when switching tasks
Measure improvement: Tokens per completed task

Current metric: Tool usage distribution
Optimization:
  • Learn which tools work best for each task type
  • Use specialized tools instead of generic ones
  • Create custom tool chains for common workflows
Measure improvement: Task completion time

Goal: Learn and Improve

Weekly Review:
  • What tools did you use most?
  • What was your success rate trend?
  • Which sessions were most productive?
Monthly Review:
  • How has your tool usage evolved?
  • Are you getting faster?
  • What new tools have you adopted?
A/B Testing:
  • Try different prompt styles for the same task
  • Compare success rates and execution times
  • Document what works best
Keep a Prompt Library:
  • Save successful prompts
  • Share with teammates
  • Iterate and improve
Benchmarking:
  • How does your success rate compare to team average?
  • Are you using tools similarly to peers?
  • What can you learn from top performers?

Daily Workflow

Morning Review (5 minutes)

1. Open A24Z dashboard
2. Check yesterday's metrics:
   - Tool success rate
   - Token usage
   - Most used tools
3. Review any failed executions
4. Set goals for today

During Coding

1. Use AI tools as normal
2. Pay attention to tool feedback
3. Refine prompts if tools fail
4. Keep sessions focused (15-30 min)

End of Day Review (10 minutes)

1. Review today's sessions
2. Note successful patterns
3. Document any learnings
4. Plan improvements for tomorrow

Best Practices

Prompt Engineering

✅ Do

  • Be specific and concise
  • Provide necessary context
  • Use structured formats
  • Reference files by path

❌ Don't

  • Include unnecessary context
  • Paste entire files
  • Use vague descriptions
  • Assume AI knows your codebase

Session Management

Start New Sessions When:
  • Switching to a different feature/bug
  • Context gets too large (>10 interactions)
  • Changing programming languages
  • Moving to a different part of the codebase
Keep Same Session When:
  • Iterating on the same feature
  • Making related changes
  • Context is still relevant

Tool Selection

| Task Type | Recommended Tools | Why |
| --- | --- | --- |
| Code generation | write_file, create | Direct, efficient |
| Debugging | view, bash | Good for investigation |
| Refactoring | str_replace | Surgical changes |
| Testing | bash | Run tests directly |
| Research | view, search_code | Explore codebase |

Red Flags to Watch For

Watch out for these warning signs:

🚨 Declining Success Rate

Warning sign: Success rate dropping week-over-week
Possible causes:
  • Increasingly complex tasks
  • Prompts becoming less effective
  • Tool limitations
Action:
  • Review recent failures
  • Simplify prompts
  • Break tasks into smaller pieces

🚨 Increasing Token Usage

Warning sign: Token usage growing without increased output
Possible causes:
  • Context bloat
  • Repetitive information
  • Inefficient prompts
Action:
  • Clear session history more often
  • Remove unnecessary context
  • Use file references

🚨 High Tool Failure Rate

Warning sign: >20% of tool executions failing
Possible causes:
  • Incorrect tool usage
  • Bugs in AI tool
  • Environment issues
Action:
  • Read tool documentation
  • Check error messages
  • Verify environment setup

Troubleshooting Common Issues

Issue: Low Success Rate

1. Identify Failed Tools: Filter the dashboard by failed executions
2. Review Error Messages: Look for patterns in failure reasons
3. Refine Prompts: Make prompts more specific and focused
4. Test Changes: Try similar tasks with improved prompts

Issue: Slow Tool Execution

1. Check Average Times: Compare against baseline (usually <5s)
2. Identify Slow Tools: Find which specific tools are slow
3. Reduce Context: Provide only necessary information
4. Break Up Requests: Split large tasks into smaller ones

Advanced Tips

Measuring Your Personal ROI

Track your own productivity gains.

Weekly Self-Assessment:
# Week of [Date]

## Time Saved
- Tasks that used to take X hours now take Y hours
- Estimated time saved: [X-Y] hours
- Value at $75/hour: $[amount]

## Quality Improvements
- Bugs caught by AI before commit: [count]
- Code reviews with fewer issues: [count]
- Test coverage increase: [percentage]

## New Skills Learned
- [Skill/pattern 1]
- [Skill/pattern 2]
- [Skill/pattern 3]

## ROI Calculation
- Cost this week: $[amount]
- Value generated: $[amount]
- ROI: [ratio]x
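
The ROI arithmetic in the template is simple enough to script. A minimal sketch with placeholder numbers (the $75/hour rate comes from the template above; everything else is made up):

```typescript
// Placeholder figures; substitute your own from the weekly self-assessment.
const hoursSaved = 6;      // tasks that used to take X hours now take Y hours
const hourlyRate = 75;     // $/hour, matching the template above
const costThisWeek = 45;   // $ spent on AI tooling this week (hypothetical)

const valueGenerated = hoursSaved * hourlyRate; // 6 * 75 = $450
const roi = valueGenerated / costThisWeek;      // 450 / 45 = 10

console.log(`Value generated: $${valueGenerated}, ROI: ${roi.toFixed(1)}x`); // $450, 10.0x
```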

First-Time Success Rate Improvement

Focus on getting things right the first time.

Before Each Task:
  1. Review similar successful prompts
  2. Gather all necessary context
  3. Be specific about requirements
  4. Include examples when helpful
Track Your Progress:
| Week | First-Time Success | Improvement |
| --- | --- | --- |
| 1 | 65% | Baseline |
| 2 | 72% | +7% |
| 3 | 78% | +13% |
| 4 | 85% | +20% |
Target: >80% first-time success rate
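
The Improvement column is just percentage points gained over the week-1 baseline. A small sketch that reproduces the numbers in the table:

```typescript
// Weekly first-time success rates as fractions; week 1 is the baseline.
const weeklyRates = [0.65, 0.72, 0.78, 0.85];
const baseline = weeklyRates[0];

weeklyRates.forEach((rate, i) => {
  // Improvement in percentage points over the baseline (week 4: 85% - 65% = +20%).
  const delta = Math.round((rate - baseline) * 100);
  console.log(`Week ${i + 1}: ${Math.round(rate * 100)}% (${i === 0 ? "Baseline" : `+${delta}%`})`);
});
// Week 1: 65% (Baseline)  Week 2: 72% (+7%)  Week 3: 78% (+13%)  Week 4: 85% (+20%)
```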

Context Efficiency

Get better results with less input.

Optimize Your Prompts:
  • ✅ Use file references instead of pasting code
  • ✅ Remove boilerplate and comments
  • ✅ Focus on what’s changed/needed
  • ✅ Clear session history regularly
Measure Efficiency:
Efficiency Score = Quality of Output / Input Tokens Used

Track this weekly:
Week 1: 0.65
Week 2: 0.73 (+12%)
Week 3: 0.82 (+26%)
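
The week-over-week gains above are measured against the week-1 score. How you rate "quality of output" is up to you (for example, a 1-10 self-rating of how much manual fix-up was needed, divided by input tokens); the sketch below just reproduces the tracking arithmetic:

```typescript
// Efficiency Score = quality of output / input tokens used.
// Keep your quality scale consistent week to week so the trend is meaningful.

// Weekly scores from the example above; gains are measured against week 1.
const weekly = [0.65, 0.73, 0.82];
weekly.forEach((score, i) => {
  const gain = Math.round((score / weekly[0] - 1) * 100);
  console.log(`Week ${i + 1}: ${score.toFixed(2)}${i > 0 ? ` (+${gain}%)` : ""}`);
});
// Week 1: 0.65   Week 2: 0.73 (+12%)   Week 3: 0.82 (+26%)
```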

Create Custom Workflows

Document your most effective workflows.

Example: Feature Development Workflow
# Feature Development with AI

## Phase 1: Research (Session 1)
**Goal:** Understand existing patterns

**Prompts:**
1. "Show me similar features in the codebase"
2. "Explain the architecture for [area]"
3. "What are the testing patterns we use?"

**Duration:** 15-20 minutes
**Success Rate:** Track in A24Z

## Phase 2: Implementation (Session 2)
**Goal:** Build the feature

**Prompts:**
1. "Create [component] following our patterns"
2. "Add error handling consistent with our approach"
3. "Generate tests based on our conventions"

**Duration:** 30-45 minutes
**Success Rate:** Track in A24Z

## Phase 3: Refinement (Session 3)
**Goal:** Polish and optimize

**Prompts:**
1. "Refactor [code] to improve readability"
2. "Add edge case handling for [scenario]"
3. "Optimize performance for [operation]"

**Duration:** 15-30 minutes
**Success Rate:** Track in A24Z

## Results Tracking
- Total time: [X] hours
- Code quality: [metrics]
- Test coverage: [percentage]
- Review feedback: [count]

Track Your Progress

Keep a weekly journal:
# Week of Jan 1-7

## Metrics
- Success rate: 94% (↑2% from last week)
- Avg execution time: 3.2s (↓0.5s)
- Sessions: 23

## Learnings
- Focused prompts work better
- Breaking tasks helps with complex features

## Goals for Next Week
- Try new refactoring tools
- Reduce token usage by 10%

Team Collaboration

Sharing Your Success

Help your team improve by sharing what works.

In Code Reviews:
## PR Description

### Implementation Approach
Built using AI assistance with the following strategy:

**Prompts used:**
1. "Create React component for [feature]"
2. "Add TypeScript types following our conventions"
3. "Generate tests using Jest and RTL"

**AI Session:** [link to A24Z session]

**Success rate:** 95% (minimal manual fixes needed)

**Notes:** The AI-suggested error handling was particularly good.
Consider adding this pattern to our prompt library.

In Team Channels:
#ai-wins
🎉 Just used AI to refactor our legacy auth system in 2 hours 
instead of the usual 2 days!

Key prompt: "Refactor [code] to use modern async/await patterns 
while maintaining backward compatibility"

Success rate: 90%
Time saved: 14 hours
Link: [A24Z session]

Contributing to Prompt Library

Make it easy for teammates to replicate your success.

Good Prompt Template:
# [Task Name]

## Success Rate: 92%
## Times Used: 15
## Contributors: @yourname

## Prompt
[Your exact prompt with placeholders for variables]

## Context Needed
- [What context to provide]
- [What to avoid including]

## Common Pitfalls
- [Issue 1 and how to avoid]
- [Issue 2 and how to avoid]

## Example Usage
[Real example with before/after code snippets]

Learning from Teammates

Weekly Review:
  1. Check team’s top prompts in shared library
  2. Try 2-3 new patterns from teammates
  3. Compare results to your usual approach
  4. Share feedback on what worked
Pairing Sessions:
  • Schedule 30-min pairing with high-performers
  • Watch how they craft prompts
  • Learn their workflow patterns
  • Share your own techniques

Benchmarking Against Team

Use team metrics to improve:
| Metric | You | Team Avg | Top Performer | Gap |
| --- | --- | --- | --- | --- |
| Success Rate | 87% | 89% | 94% | -7% |
| Execution Time | 4.2s | 3.8s | 2.9s | +45% |
| Cost/Session | $2.10 | $1.85 | $1.50 | +40% |
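
The Gap column is your value relative to the top performer, i.e. (you - top) / top. A quick sketch of that arithmetic:

```typescript
// Gap relative to the top performer: (your value - top) / top.
// A negative gap on success rate means you trail; a positive gap on time or
// cost means you are slower or more expensive.
function gapVsTop(you: number, top: number): string {
  const pct = Math.round(((you - top) / top) * 100);
  return `${pct >= 0 ? "+" : ""}${pct}%`;
}

console.log(gapVsTop(87, 94));   // -7%   (success rate)
console.log(gapVsTop(4.2, 2.9)); // +45%  (execution time, seconds)
console.log(gapVsTop(2.1, 1.5)); // +40%  (cost per session, $)
```
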
Action Plan:
  • Learn from top performer’s prompt library
  • Optimize context to reduce execution time
  • Focus on first-time success to reduce costs

Next Steps