Clustering

Clustering provides scalability, workload distribution, and added reliability to your applications on CloudHub 2.0. This functionality is powered by CloudHub’s scalable load-balancing service and replica scale-out.

You can enable clustering features on a per-application basis using the Anypoint Runtime Manager console when either deploying a new application or redeploying an existing application.

High Availability and Clustering

High availability means running multiple replicas that, unlike a cluster, do not share data with one another. Clustering enables multiple replicas to communicate with each other and share data because they share memory.

CloudHub 2.0 Clustering and Object Store v2

For Mule 4 apps, CloudHub 2.0 supports Mule clustering in addition to Object Store v2. Note that Object Store v2 is rate limited when you enable it in CloudHub 2.0. For more information, see Object Store v2 Overview.

You do not need to configure your apps to use Object Store in CloudHub 2.0. Additionally, Mule 4 apps support Object Store v2, which you can enable from the Anypoint Runtime Manager console.
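For example, a Mule 4 app can define a persistent object store with the ObjectStore connector; when Object Store v2 is enabled for the deployment, persistent entries are expected to be backed by Object Store v2 (and subject to its rate limits). The following is a minimal sketch under that assumption; the store, flow, and key names are illustrative, and namespace declarations are omitted.

```xml
<!-- Minimal ObjectStore connector sketch (namespace declarations omitted).
     Store, flow, and key names are illustrative. With Object Store v2 enabled
     for the app, persistent entries are expected to use Object Store v2. -->
<os:object-store name="customerCache" persistent="true"/>

<!-- Flows without a source; invoke them with flow-ref from your own flows. -->
<flow name="cache-customer">
  <os:store key="#[vars.customerId]" objectStore="customerCache">
    <os:value>#[payload]</os:value>
  </os:store>
</flow>

<flow name="read-customer">
  <os:retrieve key="#[vars.customerId]" objectStore="customerCache"/>
</flow>
```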

Before You Begin

Clustering requires:

  • The Anypoint Integration Advanced package or a Platinum or Titanium subscription to Anypoint Platform

  • A CloudHub Enterprise or Partner account type that enables you to use this feature

  • Familiarity with deploying applications using Runtime Manager

Replica Scale-out

CloudHub 2.0 enables you to select vCore options for your application, providing horizontal scalability. This fine-grained control over computing capacity provisioning gives you the flexibility to scale up your application to handle higher loads (or scale down during low-load periods) at any time.

Depending on your subscription, you can deploy your application with up to 8 replicas. To ensure that you have sufficient resources, see CloudHub 2.0 Replicas.

Replica scale-out also improves reliability. Mule runtime engine (Mule) automatically distributes multiple replicas of the same application across two or more data centers for maximum reliability.

When you deploy your application to two or more replicas, you can distribute workloads across these instances of Mule. CloudHub 2.0 provides an HTTP load-balancing service that automatically distributes HTTP requests among your assigned replicas.
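If you deploy with the Mule Maven plugin instead of the Runtime Manager console, the replica count and vCore size map to deployment parameters. The following pom.xml fragment is a minimal sketch of a CloudHub 2.0 deployment requesting two replicas; the target, environment, application name, and credential values are placeholders, and the exact parameter set can vary across plugin versions.

```xml
<!-- Sketch only: mule-maven-plugin CloudHub 2.0 deployment with two 0.5-vCore
     replicas. Target, environment, and credential values are placeholders. -->
<plugin>
  <groupId>org.mule.tools.maven</groupId>
  <artifactId>mule-maven-plugin</artifactId>
  <version>4.1.0</version> <!-- use your project's plugin version -->
  <extensions>true</extensions>
  <configuration>
    <cloudhub2Deployment>
      <uri>https://anypoint.mulesoft.com</uri>
      <provider>MC</provider>
      <environment>Sandbox</environment>
      <target>my-target-space</target> <!-- shared space or private space name -->
      <muleVersion>4.6.0</muleVersion>
      <applicationName>my-clustered-app</applicationName>
      <replicas>2</replicas>
      <vCores>0.5</vCores>
      <connectedAppClientId>${client.id}</connectedAppClientId>
      <connectedAppClientSecret>${client.secret}</connectedAppClientSecret>
      <connectedAppGrantType>client_credentials</connectedAppGrantType>
    </cloudhub2Deployment>
  </configuration>
</plugin>
```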

Batch jobs only run on a single replica at a time and cannot be distributed across multiple replicas. If Mule restarts within the same deployment, the batch status persists and processing continues. If the entire application is updated or redeployed while the batch is running, the rest of the batch job doesn't continue. To persist batch jobs in CloudHub 2.0, use Object Store v2.
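As one illustration of that approach, a batch step can checkpoint progress to a persistent object store (backed by Object Store v2 when it is enabled) so that a redeployed application can detect which records were already handled. The job, step, store, and key names below are illustrative, not a prescribed pattern.

```xml
<!-- Illustrative only: record each processed order ID in a persistent object
     store so work completed before a redeployment can be detected afterward. -->
<os:object-store name="batchCheckpoints" persistent="true"/>

<flow name="run-order-batch">
  <!-- ...build the collection of records to process earlier in the flow... -->
  <batch:job jobName="orderJob">
    <batch:process-records>
      <batch:step name="processAndCheckpoint">
        <!-- ...record processing logic... -->
        <os:store key="#['order-$(payload.orderId)']" objectStore="batchCheckpoints">
          <os:value>#['done']</os:value>
        </os:store>
      </batch:step>
    </batch:process-records>
  </batch:job>
</flow>
```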

Enable Clustering Features

You can enable or disable either or both clustering features in one of two ways:

  • When you deploy an application to CloudHub 2.0 for the first time using Runtime Manager

  • On the Deployment Target tab in Runtime Manager for a previously deployed application

Use the drop-down menus on the Deployment Target tab to select the vCore options for your application and to configure the computing power that you need.

See CloudHub 2.0 Replicas for more information about deploying to multiple replicas.

If your application is already deployed, you must redeploy it for your new settings to take effect.
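When you deploy with the Mule Maven plugin rather than the console, the clustering option corresponds to a deployment setting. The fragment below is a sketch extending the cloudhub2Deployment example shown earlier; the clustered flag is only meaningful with two or more replicas, and element names can vary across plugin versions.

```xml
<!-- Sketch: enable clustering across the requested replicas
     (extends the cloudhub2Deployment example shown earlier). -->
<cloudhub2Deployment>
  <!-- ...connection, target, and application parameters as before... -->
  <replicas>2</replicas>
  <vCores>0.5</vCores>
  <deploymentSettings>
    <clustered>true</clustered>
  </deploymentSettings>
</cloudhub2Deployment>
```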

How High Availability is Implemented

HTTP load balancing is implemented by an internal reverse proxy server. Requests to the application (domain) URL are automatically load balanced between all the application’s replica URLs.

How Clustering is Implemented

When you enable clustering, all the replicas of a Mule application act as a unit. A cluster is a virtual server composed of multiple replicas. The nodes in a cluster communicate and share information through a distributed shared memory grid. Thus, data is replicated across memory in different replicas.
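For example, a component that keeps state in an object store, such as the Idempotent Message Validator, can take advantage of this: with clustering enabled, the shared memory grid means a duplicate message should be rejected regardless of which replica receives it. The sketch below assumes that behavior; the listener configuration, path, and ID expression are illustrative, and namespace declarations are omitted.

```xml
<!-- Illustrative sketch: deduplicate by order ID. With clustering enabled, the
     validator's backing store is expected to live in the cluster's shared memory,
     so a duplicate sent to any replica is rejected. Names are placeholders. -->
<http:listener-config name="httpListenerConfig">
  <http:listener-connection host="0.0.0.0" port="8081"/>
</http:listener-config>

<flow name="deduplicate-orders">
  <http:listener config-ref="httpListenerConfig" path="/orders"/>
  <idempotent-message-validator idExpression="#[payload.orderId]"/>
  <logger level="INFO" message="#['Processing new order: $(payload.orderId)']"/>
</flow>
```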

Use Cases

You can combine clustering features in a single application.

Each use case below includes a suggested clustering configuration and its implications.

Use case: You want to scale out your application, but you are satisfied with the existing highly available CloudHub 2.0 architecture in terms of preventing service interruption or message loss.

Suggested clustering configuration: Two or more replicas; clustering not enabled

Implications:

  • Application performance is not affected by queue latency.

  • No need to configure your application to support queues.

  • If one data center experiences an outage, your replicas are available in a different data center.

Use case: You have a high-stakes process for which you need to protect against message loss, but you are not experiencing issues with handling processing load and are OK with some service interruption in the case of a data center outage.

Suggested clustering configuration: One replica; clustering enabled

Implications:

  • Application might experience some queue latency.

  • You need to configure your application to support queues before deploying.

  • If the data center in which your replica operates experiences an outage, CloudHub 2.0 automatically migrates your application to another availability zone. You might experience downtime during the migration; however, your queues ensure zero message loss.

Use case: You have a high-stakes process for which you need to protect against message loss, avoid any chance of service interruption, and handle large processing loads.

Suggested clustering configuration: Two or more replicas; clustering enabled

Implications:

  • Application might experience some queue latency.

  • You need to configure your application to support queues before deploying.

  • If one data center experiences an outage, your replicas are automatically distributed to ensure redundancy.

Use case: You have an application that does not have any special requirements regarding either processing load or message loss.

Suggested clustering configuration: One replica; clustering not enabled

Implications:

  • Application performance is not affected by queue latency.

  • No need to configure your application to support queues.

  • If the data center in which your replica operates experiences an outage, CloudHub 2.0 automatically migrates your application to another availability zone, but you might experience some downtime and message loss during the migration.