Performance Guidelines

To ensure the best possible experience for players, it is crucial to minimize latency and maintain system stability. Below are the recommended high-performance patterns and architectural best practices for integrating with GPAS.

Recommendation: Avoid database hits for every request.

Hitting the primary (e.g., SQL) database on every transaction to validate sessions or query static data adds significant latency. We strongly recommend implementing an in-memory caching layer (such as Redis or Memcached) to handle:

  • Session Validation: Validate the AuthToken and walletSessionId against cache.
  • Static Data: Cache game lists, currency configurations, and provider settings.

This approach significantly reduces I/O wait times and protects your primary database from high load spikes.
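As a minimal sketch of this cache-aside pattern, the class below validates sessions against a cache first and falls back to the database only on a miss. The names (`CacheAsideSessions`, `SESSION_TTL`, `db_lookup`) are illustrative, not part of the GPAS API; a plain dict stands in for the cache, where production code would use a Redis client (e.g., redis-py's `get`/`setex`).

```python
import time

SESSION_TTL = 300  # seconds; tune to your actual session lifetime


class CacheAsideSessions:
    """Validate sessions against an in-memory cache before touching the DB.

    `cache` is any dict-like store (a Redis client in production);
    `db_lookup` is the fallback query against the primary database.
    """

    def __init__(self, cache, db_lookup):
        self.cache = cache
        self.db_lookup = db_lookup

    def validate(self, auth_token, wallet_session_id):
        key = f"session:{auth_token}:{wallet_session_id}"
        entry = self.cache.get(key)
        if entry is not None:
            value, expires_at = entry
            if expires_at > time.time():
                return value  # cache hit: no database round-trip
        # Cache miss (or expired): query the database once, then repopulate.
        value = self.db_lookup(auth_token, wallet_session_id)
        self.cache[key] = (value, time.time() + SESSION_TTL)
        return value
```

With this in place, repeated validations of the same `AuthToken`/`walletSessionId` pair within the TTL never reach the primary database.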

Recommendation: Use dedicated resources and minimize “Cold Starts”.

Where possible, use servers with dedicated resources rather than shared hosting environments. This ensures stable CPU performance and avoids latency spikes caused by “noisy neighbors” in shared environments.

If using Serverless architectures (e.g., AWS Lambda, Azure Functions, Google Cloud Functions):

  • It is crucial to minimize initialization overhead.
  • Keep processes “warm” (e.g., via Provisioned Concurrency on AWS Lambda) to avoid the 2-3 second delay of a full stack boot-up on the first request.
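A common way to minimize initialization overhead is to do expensive setup at module scope rather than inside the handler. The sketch below illustrates this with hypothetical names (`make_heavy_client`, `handler`); the module-level code runs once per container instance (the cold start), and warm invocations reuse the result.

```python
import json

INIT_COUNT = 0  # counts cold-start initializations (for illustration only)


def make_heavy_client():
    """Stand-in for expensive setup: DB connection pools, SDK clients, config loads."""
    global INIT_COUNT
    INIT_COUNT += 1
    return {"ready": True}


# Module scope executes once per container instance (the "cold start");
# the result is then reused across all warm invocations.
heavy_client = make_heavy_client()


def handler(event, context):
    # Per-request work only: nothing expensive is re-initialized here.
    return {"statusCode": 200, "body": json.dumps({"ready": heavy_client["ready"]})}
```

Combined with keeping instances warm, this keeps the first-request penalty off the transactional path.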

Recommendation: Keep the critical path clean.

The logic within the transactional endpoints (Debit, Credit, Rollback) should be as simple and fast as possible. Auxiliary tasks should not block the response to the player:

  • Async Logging: Write logs to disk or external services asynchronously.
  • Async Metrics: Collect APM (Application Performance Monitoring) metrics in the background.
  • Fire-and-Forget: Any task not strictly required to validate the balance or perform the transaction should be offloaded.
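The offloading pattern above can be sketched with a background worker draining a queue, so that slow log I/O never blocks the response to the player. The names (`handle_debit`, `apply_debit`) are hypothetical stand-ins, not GPAS endpoints; `apply_debit` is a stub for the real balance check and transaction.

```python
import queue
import threading

log_queue: queue.Queue = queue.Queue()


def _log_worker() -> None:
    """Drain log records in the background; slow I/O happens here, never on the request thread."""
    while True:
        record = log_queue.get()
        if record is None:  # sentinel: shut the worker down
            break
        # In production: write to disk or ship to an APM/logging service here.
        log_queue.task_done()


threading.Thread(target=_log_worker, daemon=True).start()


def apply_debit(amount: int) -> dict:
    # Stand-in for the real critical path: validate balance, perform transaction.
    return {"status": "OK", "debited": amount}


def handle_debit(amount: int) -> dict:
    result = apply_debit(amount)             # critical path only
    log_queue.put_nowait(f"debit {amount}")  # fire-and-forget: non-blocking enqueue
    return result
```

The same queue-and-worker shape applies to metrics collection and any other auxiliary task that is not strictly required to complete the transaction.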

Recommendation: Enable HTTP/2.

If your infrastructure supports it, enabling HTTP/2 greatly improves efficiency compared to HTTP/1.1 by multiplexing many concurrent requests over a single connection, reducing connection overhead and latency under high load.
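As a sketch, HTTP/2 is typically enabled at the TLS-terminating reverse proxy. Assuming an nginx front end (the hostname, certificate paths, and upstream name below are placeholders), the relevant directives look like:

```nginx
server {
    # HTTP/2 is negotiated over ALPN, so in practice it requires TLS.
    # On nginx >= 1.25 prefer a separate "http2 on;" directive instead.
    listen 443 ssl http2;
    server_name api.example.com;  # placeholder hostname

    ssl_certificate     /etc/nginx/certs/fullchain.pem;  # placeholder paths
    ssl_certificate_key /etc/nginx/certs/privkey.pem;

    location / {
        proxy_pass http://gpas_backend;  # placeholder upstream group
    }
}
```

Verify with your load-testing setup that clients actually negotiate HTTP/2 end to end; a proxy that speaks HTTP/1.1 to the upstream still benefits from multiplexed client connections.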