# Performance Profiling

Debug performance issues using Go’s pprof.

Alita Robot supports Go’s pprof profiling tool for diagnosing performance bottlenecks. This guide covers how to enable and use the profiling endpoints.
## Enabling Profiling

### Environment Variable

```bash
# Enable pprof endpoints (development only!)
ENABLE_PPROF=true
```

When enabled, the following endpoints become available:
| Endpoint | Description |
|---|---|
| `/debug/pprof/` | Index of available profiles |
| `/debug/pprof/heap` | Heap memory profile |
| `/debug/pprof/goroutine` | Goroutine stack traces |
| `/debug/pprof/threadcreate` | Thread creation profile |
| `/debug/pprof/block` | Block (goroutine blocking) profile |
| `/debug/pprof/mutex` | Mutex contention profile |
## CPU Profiling

CPU profiling requires a separate request:

```bash
# Collect 30 seconds of CPU profile
curl -o cpu.pprof "http://localhost:8080/debug/pprof/profile?seconds=30"
```

## Using pprof
### Interactive Analysis

Start the pprof interactive console:

```bash
go tool pprof http://localhost:8080/debug/pprof/heap
```

Common commands in pprof:
| Command | Description |
|---|---|
| `top` | Show top functions by resource usage |
| `web` | Open visual graph in browser |
| `list funcname` | Show source for a specific function |
| `traces` | Print all sample traces |
## Examples

### Option 1: Web UI Mode

```bash
# Open web UI at http://localhost:8081
go tool pprof -http=:8081 http://localhost:8080/debug/pprof/heap
```

Use the web interface to explore the profile visually.
### Option 2: Interactive Console

```bash
# Drop into interactive console
go tool pprof http://localhost:8080/debug/pprof/heap

# Then run commands like:
(pprof) top
(pprof) web
(pprof) list funcname
```

### Goroutine Analysis

```bash
# Get a goroutine dump in console mode
go tool pprof http://localhost:8080/debug/pprof/goroutine

# Check for goroutine leaks
(pprof) top
```

### CPU Profiling

```bash
# Collect 30 seconds of CPU profile
# Note: the server has a 10s WriteTimeout - use a shorter duration or profile externally
go tool pprof -seconds=30 http://localhost:8080/debug/pprof/profile
```
## Flame Graphs
Flame graphs provide a visual representation of CPU or memory usage.
### Option 1: go tool pprof (Recommended)
The simplest way to generate flame graphs:
```bash
# Generate SVG flame graph from heap profile
go tool pprof -svg -output=heap-flamegraph.svg http://localhost:8080/debug/pprof/heap

# Generate SVG flame graph from CPU profile (30 seconds)
go tool pprof -svg -output=cpu-flamegraph.svg "http://localhost:8080/debug/pprof/profile?seconds=30"

# Or open in browser directly
go tool pprof -http=:8081 http://localhost:8080/debug/pprof/heap
```

### Option 2: FlameGraph Perl Scripts
For more control, use Brendan Gregg’s FlameGraph tools:

```bash
# Clone the FlameGraph repository
git clone https://github.com/brendangregg/FlameGraph.git
cd FlameGraph

# First, fetch the profile as raw protobuf
curl -s http://localhost:8080/debug/pprof/heap > heap.pb

# Export the stacks in pprof's raw text format
# (flamegraph.pl cannot read -proto output directly)
go tool pprof -raw -output=heap.raw heap.pb

# Collapse the raw stacks into folded format
./stackcollapse-go.pl heap.raw > heap.folded

# Generate the flame graph
./flamegraph.pl heap.folded > heap-flamegraph.svg
```

## Common Performance Issues

### High Memory Usage

- Collect a heap profile during peak usage
- Look for objects that shouldn’t be retained
- Check for unbounded caches or slices
### Goroutine Leaks

- Compare goroutine profiles over time
- Look for goroutines waiting on channels
- Check for missing context cancellations
### CPU Spikes

- Collect a CPU profile during the spike
- Identify hot code paths
- Look for busy loops or excessive locking
## Production Alternatives

For production monitoring without pprof:

- Use Prometheus metrics for observability
- Enable `ENABLE_PERFORMANCE_MONITORING` for auto-remediation
- Monitor the `/metrics` endpoint for custom metrics
- Use external APM tools (Datadog, New Relic)
## Troubleshooting

### Profile is Empty

- Ensure traffic is hitting the bot during collection
- CPU profiles require active processing

### Connection Refused

- Verify `ENABLE_PPROF=true` is set
- Check that the bot is running and the port is correct
- Ensure the firewall allows access to the pprof port