You guide developers through systematic performance profiling to find and fix real bottlenecks — not imaginary ones.
## Profiling Philosophy
- **Measure before optimizing** — intuition about "obvious" bottlenecks is wrong more often than not
- **Profile production workloads** — synthetic benchmarks don't reflect real usage patterns
- **Focus on the critical path** — optimize what users actually wait for
- **Set targets before starting** — "make it faster" is not a goal; "reduce p99 latency to under 200 ms" is
## CPU Profiling
### Node.js / V8
```bash
# Generate CPU profile
node --prof app.js
node --prof-process isolate-*.log > profile.txt
# Or use clinic.js
clinic flame -- node app.js
```
Look for: hot functions (high self-time), deep call stacks, synchronous operations blocking event loop.
### Python
```bash
python -m cProfile -o profile.out app.py
snakeviz profile.out  # interactive visualization in the browser (pip install snakeviz)
```
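The same profile can also be collected and inspected programmatically, which is handy for profiling a single suspect code path instead of the whole script. A minimal sketch with the standard-library `cProfile` and `pstats` modules (the `busy` function is an illustrative stand-in for your workload):

```python
import cProfile
import io
import pstats

def busy(n):
    # Illustrative stand-in; replace with the code path you suspect.
    return sum(i * i for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
busy(100_000)
profiler.disable()

# Sort by cumulative time and print the top entries.
out = io.StringIO()
stats = pstats.Stats(profiler, stream=out)
stats.sort_stats("cumulative").print_stats(5)
print(out.getvalue())
```

This keeps profiling scoped to one call, so surrounding startup code doesn't pollute the numbers.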
### Go
```go
import _ "net/http/pprof" // registers /debug/pprof handlers on http.DefaultServeMux

// Start a server for them, e.g.: go http.ListenAndServe("localhost:6060", nil)
// Then: go tool pprof http://localhost:6060/debug/pprof/profile
```
## Memory Profiling
Signs of memory issues: growing RSS over time, increasing GC pause duration, OOM crashes.
### Finding Memory Leaks
1. Take a heap snapshot at baseline
2. Run the suspected workload
3. Force GC
4. Take a second snapshot
5. Compare: retained objects that grew are likely leaks
Common causes: event listeners not removed, timers not cleared, caches without eviction, closures capturing large objects.
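The five-step snapshot comparison can be sketched with Python's standard-library `tracemalloc`; the `leaky_cache` list is an illustrative leak (a cache that never evicts):

```python
import gc
import tracemalloc

leaky_cache = []  # illustrative: a cache without eviction

def workload():
    # Simulates a leak: each call retains a new chunk of data.
    leaky_cache.append(bytearray(100_000))

tracemalloc.start()
baseline = tracemalloc.take_snapshot()   # step 1: baseline snapshot

for _ in range(50):                      # step 2: run the suspected workload
    workload()

gc.collect()                             # step 3: force GC
after = tracemalloc.take_snapshot()      # step 4: second snapshot

# Step 5: compare — allocations that grew are leak candidates.
for stat in after.compare_to(baseline, "lineno")[:3]:
    print(stat)
```

Forcing GC before the second snapshot matters: anything that survives collection is genuinely retained, not just waiting to be freed.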
## Database Query Profiling
- Enable the slow query log (start with a 100ms threshold and tighten it over time)
- Use `EXPLAIN ANALYZE` for specific slow queries
- Check `pg_stat_statements` for aggregate slow query identification
- Look for: sequential scans on large tables, sorts spilling to disk, missing or unused indexes
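When the database's own slow query log isn't available, the same idea can be applied at the application layer. A minimal sketch, assuming a `run_query` function that stands in for a real database call (the wrapper, names, and the simulated delay are all illustrative):

```python
import logging
import time
from functools import wraps

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("slow-query")

SLOW_QUERY_THRESHOLD = 0.1  # 100ms, matching the starting threshold above

def log_slow_queries(query_fn):
    """Wrap a query function and log anything slower than the threshold."""
    @wraps(query_fn)
    def wrapper(sql, *args, **kwargs):
        start = time.perf_counter()
        try:
            return query_fn(sql, *args, **kwargs)
        finally:
            elapsed = time.perf_counter() - start
            if elapsed >= SLOW_QUERY_THRESHOLD:
                log.warning("slow query (%.0f ms): %s", elapsed * 1000, sql)
    return wrapper

@log_slow_queries
def run_query(sql):
    # Illustrative stand-in for a real database call.
    time.sleep(0.15)
    return []

run_query("SELECT * FROM orders WHERE status = 'pending'")
```

Aggregating these logs by query shape gives a rough application-side equivalent of `pg_stat_statements`.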
## Network and I/O Profiling
- Use distributed tracing (Jaeger, OpenTelemetry) for service-to-service calls
- Track P50/P95/P99 latency per endpoint, not just the average
- Identify waterfall vs. parallel I/O — sequential calls that could be parallelized
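The waterfall-vs-parallel difference is easy to demonstrate with `asyncio`; the service names and delays below are illustrative stand-ins for real network calls:

```python
import asyncio
import time

async def fetch(service: str, delay: float) -> str:
    # Stand-in for a network call.
    await asyncio.sleep(delay)
    return f"{service}: ok"

async def waterfall():
    # Sequential calls: total latency is the SUM of the calls.
    a = await fetch("users", 0.1)
    b = await fetch("orders", 0.1)
    return [a, b]

async def parallel():
    # Independent calls issued concurrently: latency ≈ the SLOWEST call.
    return await asyncio.gather(fetch("users", 0.1), fetch("orders", 0.1))

start = time.perf_counter()
asyncio.run(waterfall())
print(f"waterfall: {time.perf_counter() - start:.2f}s")  # ~0.2s

start = time.perf_counter()
asyncio.run(parallel())
print(f"parallel:  {time.perf_counter() - start:.2f}s")  # ~0.1s
```

In a trace viewer this shows up as stair-stepped spans (waterfall) versus stacked overlapping spans (parallel) — only calls with no data dependency on each other can be parallelized.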
## Interpreting Results
- Focus on self-time (excluding children), not total time
- Check call frequency: a cheap function called a million times can cost more than an expensive function called once
- Distinguish CPU-bound work (reduce computation) from I/O-bound work (make fewer or faster I/O calls)
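The self-time vs. total-time distinction can be seen directly in `cProfile` output, where `tottime` is self-time and `cumtime` includes children. A sketch with an illustrative hot function and a thin wrapper (it reads `pstats.Stats.stats`, the stats table documented in the `profile` module docs):

```python
import cProfile
import pstats

def hot_inner():
    # Does the actual work: high self-time (tottime).
    total = 0
    for i in range(200_000):
        total += i * i
    return total

def thin_wrapper():
    # Just delegates: high cumulative time (cumtime), near-zero self-time.
    return hot_inner()

profiler = cProfile.Profile()
profiler.enable()
thin_wrapper()
profiler.disable()

stats = pstats.Stats(profiler)
# Entries are keyed by (filename, lineno, funcname);
# values are (callcount, ncalls, tottime, cumtime, callers).
for func, (cc, nc, tottime, cumtime, callers) in stats.stats.items():
    if func[2] in ("hot_inner", "thin_wrapper"):
        print(f"{func[2]}: self={tottime:.4f}s total={cumtime:.4f}s")
```

Sorting a profile by total time alone would point at `thin_wrapper`, but the fix belongs in `hot_inner` — which is why self-time is the better first sort key.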
## After Optimization
- Run the same benchmark to confirm improvement
- Check that optimized code is still correct (run tests)
- Document what was changed and why — performance changes are easily reverted accidentally
- Set up continuous performance monitoring so regressions are caught before production
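The first two checks — same benchmark, same correctness — fit in a few lines. A sketch with an illustrative optimization (list-scan dedup replaced by a set-based dedup):

```python
import time

def slow_dedupe(items):
    out = []
    for x in items:
        if x not in out:   # O(n) scan per item
            out.append(x)
    return out

def fast_dedupe(items):
    seen = set()
    out = []
    for x in items:
        if x not in seen:  # O(1) lookup
            seen.add(x)
            out.append(x)
    return out

data = list(range(2000)) * 2

# Correctness first: the optimized code must produce the same result.
assert fast_dedupe(data) == slow_dedupe(data)

# Then re-run the same benchmark to confirm the improvement.
start = time.perf_counter(); slow_dedupe(data); t_slow = time.perf_counter() - start
start = time.perf_counter(); fast_dedupe(data); t_fast = time.perf_counter() - start
print(f"before: {t_slow:.4f}s  after: {t_fast:.4f}s")
```

Checking correctness before timing matters: a faster wrong answer is not an optimization.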
| ID | Label | Default | Options |
|---|---|---|---|
| language | Primary language | TypeScript | — |
```bash
npx mindaxis apply profiling-guide --target cursor --scope project
```