Resource Allocation and Performance on Anypoint Runtime Fabric

Before deploying a Mule application to Anypoint Runtime Fabric, determine the correct amount of resources to allocate. Determining resource allocation is also important when configuring the internal load balancer on Anypoint Runtime Fabric.

When a Mule application is deployed to Runtime Fabric, the application is deployed with its own Mule runtime engine (Mule). You also specify the number of replicas, or instances, of that application and runtime. The resources available to each replica are determined by the values you set when deploying the application.

You can allocate the following resources when deploying an application:

  • vCPU cores

  • Memory

Minimum Core and Memory Requirements

The minimum values for the amount of CPU and memory allocated to each replica of a Mule application are:

  • vCPU Cores: 0.07 cores

  • Memory: 0.7 GiB memory (Mule 4) or 0.5 GiB (Mule 3.x)
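
These minimums apply to each replica. The following is a minimal Python sketch that checks a requested per-replica allocation against the documented minimums before deployment; the function and parameter names are illustrative only and are not part of any Anypoint API.

    # Validate a requested per-replica allocation against the documented minimums.
    # Names are illustrative only; this is not an Anypoint API.
    MIN_VCPU_CORES = 0.07
    MIN_MEMORY_GIB = {"4": 0.7, "3": 0.5}  # Mule 4 vs. Mule 3.x

    def validate_replica_resources(vcpu_cores, memory_gib, mule_major="4"):
        """Raise ValueError if the allocation is below the documented minimums."""
        if vcpu_cores < MIN_VCPU_CORES:
            raise ValueError(f"vCPU must be at least {MIN_VCPU_CORES} cores, got {vcpu_cores}")
        if memory_gib < MIN_MEMORY_GIB[mule_major]:
            raise ValueError(
                f"Memory must be at least {MIN_MEMORY_GIB[mule_major]} GiB "
                f"for Mule {mule_major}, got {memory_gib} GiB"
            )

    validate_replica_resources(vcpu_cores=0.1, memory_gib=1.0)    # passes
    # validate_replica_resources(vcpu_cores=0.05, memory_gib=1.0) # raises ValueError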

Calculating Memory Allocation on CloudHub and Runtime Fabric

Anypoint Platform allocates native and heap memory for a deployed application. Heap memory is the portion of the total memory that is made available to the Mule runtime and the application, and is used for tasks such as processing payloads.

Both CloudHub and Anypoint Runtime Fabric allocate both types of memory; however, they differ in how the allocation of each type is calculated and reported:

  • Runtime Fabric lists the total memory available for an application.
    The available heap memory is approximately half of the total memory allocated to an application.

  • CloudHub describes minimum memory requirements in terms of the heap memory available to an application.
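
Because Runtime Fabric reports total memory while CloudHub sizes applications in terms of heap, it can be useful to convert between the two. The following is a minimal Python sketch of that conversion, based on the approximation above that heap is roughly half of the total memory allocated to a replica; the function name and example value are illustrative only.

    # Estimate the heap available to an application on Runtime Fabric,
    # assuming heap is approximately half of the replica's total memory.
    def approximate_heap_gib(total_memory_gib):
        """Return the approximate heap memory for a given total allocation."""
        return total_memory_gib / 2

    # Example: a replica allocated 1.4 GiB of total memory has roughly
    # 0.7 GiB of heap available to the Mule runtime and the application.
    print(approximate_heap_gib(1.4))  # 0.7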

Application Startup Times

The startup time for a Mule application is correlated with the number of vCPU cores the application has access to:

vCPU Cores    Approximate Startup Time
1.00          Less than 1 minute
0.50          Under 2 minutes
0.10          6 to 8 minutes
0.07          10 to 14 minutes

Application Performance

The resources allocated to your Mule application determine the application’s performance. Following are the approximate values for throughput based on the number of vCPU cores allocated for a single Mule application performing simple processing on a 10-KB payload:

vCPU Cores    Concurrent Connections    Avg Response Time (ms)
1.00          10                        15
0.50          5                         15
0.10          1                         25
0.07          1                         78

Run performance and load testing on your Mule applications to determine the number of resources to allocate.
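
As a rough cross-check on those load tests, you can estimate sustainable throughput from the table above using Little's law (throughput ≈ concurrent connections / average response time). The Python sketch below applies that estimate to the documented figures; treat the results as back-of-the-envelope numbers only, because actual throughput depends on your workload.

    # Rough throughput estimate from the table above using Little's law:
    # throughput ≈ concurrent connections / average response time.
    table = {
        # vCPU cores: (concurrent connections, avg response time in ms)
        1.00: (10, 15),
        0.50: (5, 15),
        0.10: (1, 25),
        0.07: (1, 78),
    }

    for cores, (concurrency, avg_ms) in table.items():
        est_rps = concurrency / (avg_ms / 1000)  # requests per second
        print(f"{cores:.2f} vCPU: ~{est_rps:.0f} req/s (estimated)")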

Internal Load Balancer

Inbound traffic is processed using an internal load balancer managed by Anypoint Runtime Fabric. Because this load balancer is responsible for TLS termination, the number of resources required scales based on the number of incoming connections and the average payload size for each request.

Performance test results are based on an Anypoint Runtime Fabric cluster with controller nodes running on AWS EC2 M4 instances. The load generator used in the performance test was hosted on a separate AWS instance in the same region. The M4 instances feature a custom Intel Xeon E5-2676 v3 Haswell processor optimized for EC2, with a base clock rate of 2.4 GHz and Intel Turbo Boost speeds of up to 3.0 GHz.

A C++-based load generator, which handles SSL connections more efficiently, was used to achieve maximum throughput.

The following table lists the approximate number of requests per second (with payloads averaging 10 KB) that a single replica of the internal load balancer can serve, based on the number of vCPU cores allocated to it:

vCPU Cores    Max Requests per Second (Connection Re-use)    Max Requests per Second (No Connection Re-use)
1.00          2000                                           175
0.75          1500                                           100
0.50          1000                                           50
0.25          100                                            10

The internal load balancer runs on the controller VMs of Runtime Fabric. Size the VMs based on the amount and type of inbound traffic. You can allocate only half of the available CPU cores on each VM to the internal load balancer.
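
The following Python sketch illustrates one way to use the table above when sizing: it picks the smallest documented vCPU allocation for the internal load balancer that meets an expected peak request rate, and enforces the rule that at most half of a controller VM's cores can be allocated to the load balancer. The function name and inputs are illustrative; base real sizing decisions on your own load tests.

    # Pick the smallest documented ILB vCPU allocation that meets a target
    # requests-per-second rate, based on the table above. Illustrative only.
    ILB_CAPACITY = [
        # (vCPU cores, max rps with connection re-use, max rps without re-use)
        (0.25, 100, 10),
        (0.50, 1000, 50),
        (0.75, 1500, 100),
        (1.00, 2000, 175),
    ]

    def ilb_cores_for(target_rps, connections_reused, controller_vm_cores):
        """Return the smallest documented vCPU allocation that meets target_rps."""
        for cores, rps_reuse, rps_no_reuse in ILB_CAPACITY:
            capacity = rps_reuse if connections_reused else rps_no_reuse
            if capacity < target_rps:
                continue
            # Only half of each controller VM's cores can go to the load balancer.
            if cores > controller_vm_cores / 2:
                raise ValueError("Controller VM is too small for this allocation")
            return cores
        raise ValueError("Target exceeds a single replica's documented capacity")

    print(ilb_cores_for(target_rps=800, connections_reused=True, controller_vm_cores=2))  # 0.5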