Creating and Managing a Cluster Manually
Enterprise Edition
This page describes manual creation and configuration of a cluster. There are three ways to create and manage clusters:
- Using the Mule Management Console’s graphical interface.
- Using Runtime Manager. For more information, see Creating and Managing Clusters.
- Manually, using a configuration file.
Do not attempt mixed management of clusters. If you create a cluster manually, do not attempt to manage it via the Management Console. The Management Console cannot recognize a manually created cluster and overwrites its configuration.
All nodes in a cluster must have the same Mule runtime engine and Runtime Manager agent version. If you are using a cumulative patch release, such as 3.8.6-20210120, all instances of Mule must be the same cumulative patch version.
Creating a Cluster Manually
Follow this procedure to create a cluster manually, using a configuration file.
1. Ensure that the node is not running, that is, that the Mule runtime server is stopped.
2. Create a file named mule-cluster.properties inside the node’s $MULE_HOME/.mule directory.
3. Edit the file with parameter = value pairs, one per line. For example:

   ...
   mule.cluster.nodes=192.168.10.21,192.168.10.22,192.168.10.23
   mule.cluster.multicastenabled=false
   mule.clusterId=<Cluster_ID>
   mule.clusterNodeId=<Cluster_Node_ID>
   ...

4. Repeat this procedure for each Mule server that you want to include in the cluster.
5. Start the Mule servers in the nodes.
For the full list of available parameters, see Cluster Configuration Parameters.
Managing a Cluster Manually
Manual management of a cluster is possible only for manually created clusters, which are not managed by the Mule Management Console.
To manually change the configuration of a cluster node, follow these steps:
1. Stop the node’s Mule server.
2. Edit the node’s mule-cluster.properties file as needed, then save it.
3. Restart the node’s Mule server.
Ensure consistency across nodes: the options you apply in the configuration file must be valid for all cluster nodes. Inconsistent options can break the cluster configuration and inadvertently disable the cluster.
Quorum Management
When managing a manually configured cluster, you can set a minimum quorum of machines required for the cluster to be operational.
When a network partition occurs, clusters remain available by default. By setting a minimum quorum size, however, you can configure your cluster to reject updates that do not meet a minimum threshold. This helps you achieve better consistency and protects your cluster in case of an unexpected loss of one of your nodes (Mule runtimes in the cluster).
Under normal circumstances, if a node dies in the cluster, you may still have enough memory available to store your data, but the number of threads available to process requests is reduced because fewer nodes remain, and the partition threads in the cluster can quickly become overwhelmed. This can lead to:
- Clients left without threads to process their requests.
- The remaining members of the cluster becoming so overwhelmed with requests that they are unable to respond and are forced out of the cluster on the assumption that they are dead.
To protect the rest of the cluster in the event of member loss, you can set a minimum quorum size; the cluster then stops concurrent updates to your nodes and throws a QuorumException whenever the number of active nodes is below your configured value.
QuorumExceptions must be caught. When configuring a quorum size for your cluster, you need to catch the thrown exception and decide how to react (for example, send an email, stop a process, log the error, or apply a retry strategy), as in the sketch below.
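The following is a minimal sketch of such handling around a write to a cluster-backed object store. It is illustrative only: this page names only QuorumException, so the concrete exception class (Hazelcast’s com.hazelcast.quorum.QuorumException) and the QuorumAwareWriter helper are assumptions, and in practice the exception may be wrapped by Mule.

import java.io.Serializable;

import com.hazelcast.quorum.QuorumException;
import org.mule.api.store.ObjectStore;
import org.mule.api.store.ObjectStoreException;

// Hedged sketch: QuorumAwareWriter is a hypothetical helper, and the exact
// exception type surfaced by Mule is assumed to be Hazelcast's QuorumException.
public class QuorumAwareWriter {

    private final ObjectStore<Serializable> store;

    public QuorumAwareWriter(ObjectStore<Serializable> store) {
        this.store = store;
    }

    public void write(Serializable key, Serializable value) throws ObjectStoreException {
        try {
            store.store(key, value);
        } catch (QuorumException e) {
            // The number of active nodes dropped below mule.cluster.quorumsize:
            // decide how to react here (send an email, stop a process, log, retry).
            System.err.println("Cluster below quorum; update rejected: " + e.getMessage());
            throw e;
        }
    }
}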
To enable quorum, add the mule.cluster.quorumsize property to the cluster configuration file {MULE_HOME}/.mule/mule-cluster.properties, defining the minimum number of nodes required to keep the cluster in an operational state.
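For example, in a three-node cluster where at least two nodes must remain active for updates to be accepted (the value 2 is illustrative):

mule.cluster.quorumsize=2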
The quorum feature is valid only for components that use an Object Store.
Object Store Persistence
Using JDBC, you can persistently store object store data in a centralized system that is accessible from all cluster nodes.
Check Mule’s hardware and software requirements for a list of supported relational database management systems. Keep in mind that, from that list, Oracle database support is a known limitation for Mule 3.8.x; support is planned for a future release.
To enable object store persistence, you need to create a database and define its configuration values in the {MULE_HOME}/.mule/mule-cluster.properties file:

- mule.cluster.jdbcstoreurl: The JDBC URL for connecting to the database.
- mule.cluster.jdbcstoreusername: Database username.
- mule.cluster.jdbcstorepassword: Database user password.
- mule.cluster.jdbcstoredriver: JDBC driver class name.
- mule.cluster.jdbcstorequerystrategy: SQL dialect.
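A hedged example, assuming a MySQL database named mule_objectstore on host db.example.com; all values, including the mysql dialect keyword and the credentials, are illustrative only:

mule.cluster.jdbcstoreurl=jdbc:mysql://db.example.com:3306/mule_objectstore
mule.cluster.jdbcstoreusername=mule
mule.cluster.jdbcstorepassword=secret
mule.cluster.jdbcstoredriver=com.mysql.jdbc.Driver
mule.cluster.jdbcstorequerystrategy=mysql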
The database tables are created automatically: this feature creates tables for each object store that you want to persist. Two tables are created per object store:

- One table stores data.
- The other table stores partitions.
Recommendations for the Object Store Database
- MuleSoft recommends that you create a dedicated database/schema to be used only for the JDBC store.
- The database username needs:
  - Permission to create objects in the database (DDL: CREATE and DROP for tables).
  - DML permissions on the objects it creates (INSERT, UPDATE, DELETE, SELECT).
- Keep in mind that the data storage must be hosted in a centralized database reachable from all nodes. Do not use more than one database per cluster. Check the cluster configuration reference for persistency for details on how to configure these values.
- Some relational databases have constraints on the length of table names. Use the mule.cluster.jdbcstoretableNametransformerstrategy property to transform long table names into shorter values. Check the Table Name Transformers section for more details on how to configure this property.
- The persistent object store uses a database connection pool based on the ComboPooledDataSource Java class. The Mule runtime engine does not set any explicit values for the connection pool behavior; the standard configuration uses the default value for each property. For example, the default value for maxIdleTime is 0, which means that idle connections never expire and are not removed from the pool; they remain connected to the database in an idle state.
  You can configure the connection pool behavior by passing your desired parameter values to the runtime, using either of the following options:
  - Pass multiple parameters in the command line when starting Mule:

    $ $MULE_HOME/bin/mule start \
        -M-Dc3p0.maxIdleTime=<value> \
        -M-Dc3p0.maxIdleTimeExcessConnections=<value>

    Replace <value> with your desired value in seconds (c3p0 idle-time settings are expressed in seconds).
  - Add multiple lines to the $MULE_HOME/conf/wrapper.conf file, as shown in the example after this list:

    wrapper.java.additional.<n>=-Dc3p0.maxIdleTime=<value>
    wrapper.java.additional.<n>=-Dc3p0.maxIdleTimeExcessConnections=<value>

    Replace <n> with the next highest sequential value from the wrapper.conf file.
  You can find more information about the pool configuration of the ComboPooledDataSource Java class in this article.
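For instance, a wrapper.conf fragment might look like the following, assuming the file’s last existing entry is wrapper.java.additional.4; the indexes and the 600- and 300-second values are purely illustrative:

wrapper.java.additional.5=-Dc3p0.maxIdleTime=600
wrapper.java.additional.6=-Dc3p0.maxIdleTimeExcessConnections=300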
Table Name Transformers
The mule.cluster.jdbcstoretableNametransformerstrategy property allows you to define a custom transformer to modify table names. For example, if you set the following property in mule-cluster.properties:

mule.cluster.jdbcstoretableNametransformerstrategy=com.mulesoft.mule.cluster.hazelcast.persistence.query.MD5TableNameTransformerStrategy

the table names are hashed using MD5 and given a prefix that identifies them as Mule tables. Hashing table names guarantees that the length constraint is honored.

Optionally, you can create a custom transformer strategy by implementing the com.mulesoft.mule.cluster.hazelcast.persistence.query.TableNameTransformerStrategy interface, as sketched below.
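As a rough illustration, the sketch below truncates long names. The interface itself is real, but its method signature is not documented on this page, so the transformTableName method is an assumed shape and com.example.cluster is a hypothetical package:

package com.example.cluster;

import com.mulesoft.mule.cluster.hazelcast.persistence.query.TableNameTransformerStrategy;

// Hypothetical sketch: the method name and signature below are assumptions,
// not confirmed by this page.
public class TruncatingTableNameTransformerStrategy implements TableNameTransformerStrategy {

    // Illustrative limit; use your database's actual table name constraint.
    private static final int MAX_TABLE_NAME_LENGTH = 30;

    // Assumed shape: maps a generated object store table name to a shorter one.
    public String transformTableName(String tableName) {
        return tableName.length() <= MAX_TABLE_NAME_LENGTH
                ? tableName
                : tableName.substring(0, MAX_TABLE_NAME_LENGTH);
    }
}

Note that naive truncation can produce collisions between long names that share a prefix, which is one reason the bundled MD5 strategy hashes names instead.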
Monitoring
You can monitor the events thrown by cluster members through JMX.
The JMX monitoring option is disabled by default. To enable it, add the mule.cluster.jmxenabled property to the {MULE_HOME}/.mule/mule-cluster.properties configuration file.
Note that enabling JMX might cause some performance overhead, because the underlying structure adds listeners to collect statistics for each individual node.
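For example, assuming the property takes a boolean value (consistent with the accepted values listed in the table below):

mule.cluster.jmxenabled=true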
Cluster Configuration Parameters
The following table lists the parameters of the mule-cluster.properties file.
| Property name | Description |
|---|---|
| mule.clusterId | Mandatory. Unique identifier for the cluster. It can be any alphanumeric string. |
| mule.clusterNodeId | Mandatory. The unique ID of the node within the cluster. It can be any integer between 1 and the number of nodes in the cluster. |
| mule.cluster.networkinterfaces | Comma-separated list of interfaces for Hazelcast to use. Wildcards are supported, for example: 192.168.1.*,192.168.100.25 |
| mule.cluster.nodes | The nodes that belong to the cluster, in the form <host> or <host>:<port>. The port number is optional; if not set, the default is 5701. To include more than one host, create a comma-separated list. This option configures the cluster with the specified fixed IP addresses; use it if you are not relying on multicast for cluster node discovery. If using this option, set mule.cluster.multicastenabled to false. Examples: two nodes listening on port 9000: 172.16.9.24:9000,172.16.9.51:9000; two nodes listening on the default port 5701: 192.168.1.19,192.168.1.20 |
| mule.cluster.quorumsize | Enables you to define the minimum number of machines required in a cluster to remain in an operational state. |
| mule.cluster.multicastenabled | (Accepted values: true, false.) Enables or disables multicast for cluster node discovery. Set to false when listing the nodes explicitly in mule.cluster.nodes. |
| mule.cluster.multicastgroup | Multicast group IP address to use. |
| mule.cluster.multicastport | Multicast port number to use. |
| mule.cluster.jdbcstoreurl | The JDBC URL for connecting to the database. Required only when storing persistent data. |
| mule.cluster.jdbcstoreusername | Database username. Required only when storing persistent data. |
| mule.cluster.jdbcstorepassword | Database user password. Required only when storing persistent data. |
| mule.cluster.jdbcstoredriver | JDBC driver class name. Required only when storing persistent data. |
| mule.cluster.jdbcstorequerystrategy | SQL dialect. Required only when storing persistent data. |
| mule.cluster.jdbcstoretableNametransformerstrategy | Allows you to define a custom transformer to modify table names. |
| mule.cluster.jmxenabled | (Accepted values: true, false.) Enables JMX monitoring of cluster events. Disabled by default. |