Performance Tuning
Optimizing a high-performance cluster requires understanding how the native XHSC engine and your worker processes interact. Use these guidelines to find the "sweet spot" for your application.
1. Finding the Worker Sweet Spot
The workers setting is the most critical lever for scaling. Avoid over-provisioning, which can lead to excessive context switching.
| Workload | Recommended Count | Logic |
|---|---|---|
| I/O Heavy | Math.max(4, CPU_CORES * 1.5) | DB/API waits allow more workers to fill the gaps. |
| CPU Heavy | CPU_CORES - 1 | Leave one core free for XHSC network handling. |
| General | "auto" | Maps 1 worker to 1 physical thread automatically. |
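The table above can be sketched as a small helper. Note that recommendedWorkers and the Workload type are illustrative names for this sketch, not part of the XyPriss API:

```typescript
import * as os from "os";

// Hypothetical helper mirroring the worker-count table.
// "io" | "cpu" | "general" are illustrative labels, not XyPriss config values.
type Workload = "io" | "cpu" | "general";

function recommendedWorkers(
  workload: Workload,
  cores: number = os.cpus().length
): number | "auto" {
  switch (workload) {
    case "io":
      // DB/API waits leave cores idle, so oversubscribe slightly.
      return Math.max(4, Math.floor(cores * 1.5));
    case "cpu":
      // Leave one core free for XHSC network handling.
      return Math.max(1, cores - 1);
    case "general":
      // Let the engine map one worker per physical thread.
      return "auto";
  }
}
```

For example, on an 8-core machine this yields 12 workers for I/O-heavy loads and 7 for CPU-heavy loads.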
2. Distribution Tuning
Strategy selection significantly impacts throughput consistency:
- least-connections: Use this if your routes have varying latencies (e.g., 10ms vs 500ms). It prevents "clumping" on busy workers.
- round-robin: Ideal for extremely uniform, high-frequency tasks where tracking connections adds unnecessary overhead.
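To make the trade-off concrete, here is a minimal sketch of both selection rules. The Worker shape and the pick* functions are hypothetical, not XyPriss internals:

```typescript
// Illustrative worker record; XyPriss tracks this internally.
interface Worker {
  id: number;
  activeConnections: number;
}

// least-connections: route to the worker with the fewest in-flight
// requests, so one slow 500ms route cannot "clump" traffic on a busy worker.
function pickLeastConnections(workers: Worker[]): Worker {
  return workers.reduce((best, w) =>
    w.activeConnections < best.activeConnections ? w : best
  );
}

// round-robin: O(1) selection with no per-connection bookkeeping,
// ideal when every request costs roughly the same.
function pickRoundRobin(workers: Worker[], counter: number): Worker {
  return workers[counter % workers.length];
}
```

The difference is bookkeeping: least-connections must track in-flight counts per worker, while round-robin only needs a single counter.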
3. Resilience & Guardrails
Safety settings protect your cluster from "cascading failures" during heavy load or partial worker instability.
If a worker crashes during a request, XHSC can transparently retry it on a different healthy worker:

```ts
requestManagement.resilience: {
  retryEnabled: true,
  maxRetries: 2,
}
```

The circuit breaker stops sending traffic to failing workers, returning 503 quickly to prevent client timeouts:
```ts
requestManagement.resilience: {
  circuitBreaker: {
    enabled: true,
    failureThreshold: 5,
  },
}
```

The Intelligence Engine
While XyPriss does not yet support dynamic auto-scaling of worker counts, it actively manages the internal resources of existing workers.
"The engine proactively signals Garbage Collection (GC) and pre-allocates memory buffers based on historical traffic patterns to maintain stable throughput during spikes."
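As a rough illustration of the second idea, a pool sizer might feed an exponentially weighted moving average of recent requests per second into a buffer count. Everything here (predictedBuffers, the 0.7/0.3 weights) is assumed for illustration; the engine's actual internals are not public API:

```typescript
// Illustrative only: size a buffer pool from historical traffic samples
// (requests/sec), so a sustained ramp-up grows the pool before the peak.
function predictedBuffers(history: number[], perRequestBuffers = 1): number {
  // Exponentially weighted moving average: recent samples weigh more.
  const ewma = history.reduce(
    (acc, sample) => 0.7 * acc + 0.3 * sample,
    history[0] ?? 0
  );
  return Math.round(ewma * perRequestBuffers);
}
```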
Use res.sendFile() to bypass IPC bottlenecks.
