Backend Server Response Time
To troubleshoot performance issues, first determine whether flow performance is heavily constrained by the latency of the remote hosts to which the flow connects.
If you overlook the overhead that communication between the application and a remote host adds to a flow, your optimization efforts can be misdirected and yield negligible improvement.
Therefore, before you begin any optimization in the Mule app, characterize the backend servers' average latency and throughput to determine whether they are what limits your application's scalability or performance.
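For example, you can establish a rough baseline for a backend server outside the Mule app with a short standalone client. The following sketch (plain Java 11+; the endpoint URL and sample count are hypothetical placeholders, not part of any Mule API) times a series of sequential requests and reports the average response time:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class BackendBaseline {

    public static void main(String[] args) throws Exception {
        // Hypothetical backend endpoint; replace with the remote host your flow calls.
        String endpoint = "https://backend.example.com/api/orders";
        int samples = 50;

        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(10))
                .build();
        HttpRequest request = HttpRequest.newBuilder(URI.create(endpoint)).GET().build();

        long totalMillis = 0;
        for (int i = 0; i < samples; i++) {
            long start = System.nanoTime();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
            totalMillis += elapsedMillis;
            System.out.printf("Request %d: HTTP %d in %d ms%n",
                    i + 1, response.statusCode(), elapsedMillis);
        }

        double avgMillis = (double) totalMillis / samples;
        // With a single sequential caller, throughput is roughly the inverse of the average response time.
        System.out.printf("Average response time: %.1f ms (~%.1f requests/second per caller)%n",
                avgMillis, 1000.0 / avgMillis);
    }
}
```

Comparing these numbers against the response times you observe through the Mule app shows how much of the total latency is attributable to the backend itself.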
Some of the factors to consider are:
- Average size of the payload exchanged between the app and the remote host
- Network latency due to the physical locations of the application and the remote host. Check whether they are in the same network, the same geographical region, the same country, or on the other side of the world (see the sketch after this list for a quick way to estimate this).
- Remote host response time
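To separate network latency from the remote host's own processing time, one option is to time only the TCP connection handshake, which takes roughly one network round trip. A minimal sketch, assuming a hypothetical host name and port:

```java
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.Socket;

public class NetworkLatencyCheck {

    public static void main(String[] args) throws Exception {
        // Hypothetical remote host and port; replace with the host your flow connects to.
        String host = "backend.example.com";
        int port = 443;
        int samples = 10;

        // Resolve DNS once so name resolution is not counted in each sample.
        InetAddress address = InetAddress.getByName(host);

        long totalMillis = 0;
        for (int i = 0; i < samples; i++) {
            long start = System.nanoTime();
            try (Socket socket = new Socket()) {
                // The TCP handshake takes about one network round trip, so connect time
                // approximates network latency without server processing time.
                socket.connect(new InetSocketAddress(address, port), 5_000);
            }
            long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
            totalMillis += elapsedMillis;
            System.out.printf("Connect %d: %d ms%n", i + 1, elapsedMillis);
        }
        System.out.printf("Average connect time: %.1f ms%n", (double) totalMillis / samples);
    }
}
```

If the connect time is a small fraction of the total response time measured earlier, most of the latency comes from the remote host's processing rather than the network path.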
Your application's throughput and response time can improve if the remote host is located within the same network boundaries as the application. However, if the remote host is outside your application's local network, you can experience increased latency that is directly proportional to the remote host's response time, regardless of the payload size.
Other factors that affect application throughput include external dependencies, concurrency, flow complexity, deployment characteristics, and machine configuration.
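Because a flow cannot respond faster than the backend it waits on, the backend's average response time combined with the available concurrency puts a hard ceiling on throughput. The sketch below applies Little's Law with hypothetical numbers to estimate that ceiling; the figures are illustrative, not measurements:

```java
public class ThroughputCeiling {

    // Rough upper bound from Little's Law:
    // throughput (requests/second) ~= concurrency / average response time (seconds).
    static double maxRequestsPerSecond(int concurrentRequests, double avgBackendResponseMillis) {
        return concurrentRequests / (avgBackendResponseMillis / 1000.0);
    }

    public static void main(String[] args) {
        // Hypothetical values: 50 concurrent flow executions, 200 ms average backend response time.
        int concurrency = 50;
        double backendMillis = 200.0;
        System.out.printf("Theoretical ceiling: ~%.0f requests/second%n",
                maxRequestsPerSecond(concurrency, backendMillis));
        // => ~250 requests/second; no tuning inside the flow can push past this limit
        //    while each request spends 200 ms waiting on the backend.
    }
}
```

If a calculation like this shows that the backend already caps throughput near the level you observe, focus optimization on the backend or the network path before tuning the flow itself.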