Cluster Configuration
XyPriss clustering is managed by the high-performance Go core (XHSC). All settings reside under the `cluster` key in your server options.
Basic Cluster Setup
```typescript
import { createServer } from "xypriss";

const app = createServer({
    cluster: {
        enabled: true,
        workers: "auto", // Spawns 1 worker per physical CPU core
        strategy: "least-connections",
        resources: {
            maxMemory: "1GB",
            maxCpu: 80,
        },
    },
});
```

Core Options
| Property | Type | Default | Description |
|---|---|---|---|
| `enabled` | `boolean` | `false` | Enables XHSC-managed process clustering. |
| `workers` | `number \| "auto"` | `"auto"` | Number of worker processes to provision. |
| `strategy` | `string` | `"least-connections"` | Distribution algorithm to use. |
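For example, `workers` also accepts a fixed count when you want to override automatic provisioning (a sketch using the options from the table; the values shown are illustrative):

```typescript
import { createServer } from "xypriss";

const app = createServer({
    cluster: {
        enabled: true,
        workers: 4, // Fixed pool of 4 workers instead of "auto"
        strategy: "round-robin",
    },
});
```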
Distribution Strategies
The XHSC engine supports multiple distribution algorithms to suit different workload profiles:
- `round-robin`: Simple sequential distribution of requests.
- `least-connections`: Sends traffic to the worker with the fewest active requests.
- `ip-hash`: Routes a client (by IP) to the same worker every time (sticky sessions).
- `latency`: Favors workers with the lowest historical response times.
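As a sketch, enabling sticky sessions only requires swapping the strategy value (other options as in the basic setup above):

```typescript
import { createServer } from "xypriss";

const app = createServer({
    cluster: {
        enabled: true,
        workers: "auto",
        strategy: "ip-hash", // Sticky sessions: same client IP, same worker
    },
});
```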
Worker Guardrails
Manage worker health and prevent resource exhaustion with native enforcement:
- `maxMemory`: Gracefully recycles a worker when its memory usage exceeds the limit (e.g., `"512MB"`, `"2GB"`).
- `maxRequests`: Caps the number of requests a worker handles before recycling, mitigating potential memory leaks.
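A minimal sketch of both guardrails together, assuming `maxRequests` sits alongside `maxMemory` under `resources` (that placement is an assumption, and the limits shown are illustrative):

```typescript
cluster: {
    enabled: true,
    resources: {
        maxMemory: "512MB",  // Recycle a worker once it exceeds 512MB
        maxRequests: 10000,  // Recycle a worker after 10,000 requests (assumed option)
    },
},
```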
Network Quality Rejection
Protect your cluster from "poison pill" requests or slow clients that might degrade performance for others.
```typescript
requestManagement: {
    networkQuality: {
        enabled: true,
        rejectOnPoorConnection: true,
        maxLatency: 500, // Reject if average latency exceeds 500ms
        minBandwidth: 1024, // Minimum 1 KB/s requirement
    },
},
```

Worker Runtimes
If you start the master process with Bun, XyPriss automatically spawns Bun workers. The engine remains agnostic to the underlying runtime while maintaining performance parity.
Performance Tuning
Finding the worker sweet spot depends on your workload: CPU-bound services rarely benefit from more workers than physical cores, while I/O-heavy services can often oversubscribe.
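That trade-off can be sketched with a small heuristic. Note that `workerCountFor` is a hypothetical helper written for this example, not an XyPriss API:

```typescript
import * as os from "os";

// Hypothetical helper (not part of XyPriss): derive a worker count
// from the workload profile.
function workerCountFor(
    profile: "cpu" | "io",
    cores: number = os.cpus().length
): number {
    // CPU-bound: extra workers beyond the core count mostly add context switching.
    // I/O-bound: workers spend time waiting, so oversubscribing (2x here) helps.
    return profile === "cpu" ? cores : cores * 2;
}
```

The result can then be passed as the `workers` value in place of `"auto"`.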
