Hardware Prerequisites
To ensure the performance and stability of Anypoint Platform Private Cloud Edition (Anypoint Platform PCE), every node in your Anypoint Platform PCE environment must meet the requirements described in this topic.
Before you install Anypoint Platform PCE, your infrastructure team must review each of the following sections and verify that your environment meets the stated disk and device requirements. If needed, contact your MuleSoft representative for assistance.
Anypoint Platform PCE consists of two components:
- Anypoint Platform Core
- Anypoint Monitoring (optional)
To ensure platform performance, stability, and high availability, Anypoint Platform Private Cloud Edition (Anypoint Platform PCE) supports two network configurations: 4-node and 7-node.
Small configuration:
- 4 nodes for Control Plane
- 3 additional nodes for Anypoint Monitoring (optional add-on)
- NFS for asset storage
- Load balancer

Large configuration:
- 7 nodes for Control Plane
- 3 additional nodes for Monitoring (optional)
- NFS for asset storage
- Load balancer
To ensure high availability and performance, each node must run on its own server in both configurations. If you are using virtual machines to host the nodes, ensure that each VM node runs on a separate physical server.
Anypoint Platform Core
The following prerequisites apply to the 4-node and 7-node configurations that run Anypoint Platform Core.
Memory and CPU Requirements
The following are the minimum requirements for Anypoint Platform Core. Depending on your environment, you may need more CPU or memory.
| Component | Requirement |
|---|---|
| RAM | 32 GB |
| CPU | 8 cores |

If the stated requirements for memory and CPU are not met, the Anypoint Platform PCE installation will fail.
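As an informal way to confirm that a node meets these minimums before you start the installation, you can check its memory and core count from the shell. This is only a suggested spot check, not part of the official installation procedure:

# Total installed memory (expect at least 32 GB for Core nodes).
grep MemTotal /proc/meminfo
# Number of available CPU cores (expect at least 8 for Core nodes).
nproc --all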
For production environments, each node in your configuration must have the following dedicated devices. All devices must be formatted either as xfs or ext4.
| Component | Size | Minimum IOPS | Mount Point | Description |
|---|---|---|---|---|
| HDD 2 System State Directory Device | 250 GB | 1500 | /var/lib/gravity | Stores system configuration and metadata, including databases and packages. Because package sizes can be large, it is important to estimate the minimum size requirements and allocate enough space as a dedicated device before installation. |
| HDD 3 Application Data Device | 250 GB | 1500 | /var/lib/data | Stores application configuration and data. The amount of space required should be at least 250 GB, but can vary depending on your specific use case. It is important to estimate the minimum size requirements and allocate enough space as a dedicated device ahead of time. |
| HDD 4 Docker Device | 100 GB | 3000 | /var/lib/gravity/planet/docker | Used by Docker's overlay storage driver. A minimum of 100 GB is required for the overlay directory. Devices with 50 GB or less experience degraded system performance or might not work at all. |
| HDD 5 Etcd Device | 20 GB | 3000 | /var/lib/gravity/planet/etcd | Provides dedicated storage for the distributed database used for cluster coordination. |
| /tmp (Installer) | 50 GB | N/A | /tmp | |
| / (root) | 50 GB | N/A | / | You must have at least 50 GB in your home directory to download and unzip the installer file. |

The space requirements listed in the previous table are based on average use. For environments with heavy traffic, validate the amount of space needed with MuleSoft Professional Services.
System State Directory Device
The main purpose of the system state directory device is to store system configuration and metadata, such as databases and packages. Because package sizes can be arbitrarily large, it is important to estimate the minimum size requirements and allocate enough space for the dedicated device.
Format this device as either xfs or ext4 and mount it as /var/lib/gravity.
The following shell snippet is an example of mounting the system state directory device using systemd mount files; it cannot be used as-is in production. You can also mount the system state directory device by using /etc/fstab or some other method. Consult your system administrator to determine the best way to meet this requirement. If you use the following example as a guide, replace <device name> with the correct device name in both places it appears.
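sudo mkfs.ext4 /dev/<device name>
sudo mkdir -p /var/lib/gravity
echo -e "[Mount]\nWhat=/dev/<device name>\nWhere=/var/lib/gravity\nType=ext4\n[Install]\nWantedBy=local-fs.target" | sudo tee /etc/systemd/system/var-lib-gravity.mount
sudo systemctl daemon-reload
sudo systemctl enable var-lib-gravity.mount
sudo systemctl start var-lib-gravity.mount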
Application Data Device
The main purpose of the application data directory device is to store application configuration and data. The minimum amount of space required is 250 GB, but your use case might require more. It is important to allocate enough space for the dedicated device.
Format this device as either xfs or ext4 and mount it as /var/lib/data.
The following shell snippet is an example of mounting the application data device using systemd mount files; it cannot be used as-is in production. You can also mount the application data device using /etc/fstab or some other method. Consult your system administrator to determine the best way to meet this requirement. If you use the following example as a guide, replace <device name> with the correct device name in both places it appears.
sudo mkfs.ext4 /dev/<device name>
sudo mkdir -p /var/lib/data
echo -e "[Mount]\nWhat=/dev/<device name>\nWhere=/var/lib/data\nType=ext4\n[Install]\nWantedBy=local-fs.target" | sudo tee /etc/systemd/system/var-lib-data.mount
sudo systemctl daemon-reload
sudo systemctl enable var-lib-data.mount
sudo systemctl start var-lib-data.mount
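If you choose the /etc/fstab route instead of a systemd mount unit, a minimal sketch of the equivalent entry might look like the following. The mount options shown are an assumption; adjust the device name and options for your environment:

# Hypothetical /etc/fstab entry for the application data device.
/dev/<device name>  /var/lib/data  ext4  defaults,nofail  0  2

After adding the entry, running sudo mount -a applies it without a reboot.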
Etcd Device
The main purpose of the etcd device is to provide dedicated storage for the distributed database used for cluster coordination. It does not require much space; 20 GB is adequate.
Format this device as either xfs or ext4 and mount it as /var/lib/gravity/planet/etcd.
The following shell snippet is an example of mounting the etcd device using systemd mount files to the path /var/lib/gravity/planet/etcd. You can also mount the etcd device using /etc/fstab or another method. Consult your system administrator to determine the best way to meet this requirement. If you use the following example as a guide, replace <device name> with the correct device name in both places it appears.
Mount this device AFTER you mount the system state device, because /var/lib/gravity/planet/etcd sits under the system state mount at /var/lib/gravity. If you are using systemd units to mount the volumes, declare the mount dependency with After=<mount unit name>.mount, for example After=var-lib-gravity.mount in the etcd mount unit.
sudo mkfs.ext4 /dev/<device name>
sudo mkdir -p /var/lib/gravity/planet/etcd
echo -e "[Mount]\nWhat=/dev/<device name>\nWhere=/var/lib/gravity/planet/etcd\nType=ext4\n[Install]\nWantedBy=local-fs.target" | sudo tee /etc/systemd/system/var-lib-gravity-planet-etcd.mount
sudo systemctl daemon-reload
sudo systemctl enable var-lib-gravity-planet-etcd.mount
sudo systemctl start var-lib-gravity-planet-etcd.mount
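As an illustration of the ordering dependency described in the note above, a complete etcd mount unit might look like the following sketch. The device path /dev/xvdX is a placeholder, and the [Unit] section is an addition that is not part of the snippet above:

# /etc/systemd/system/var-lib-gravity-planet-etcd.mount (illustrative sketch)
[Unit]
Description=Dedicated etcd storage for Anypoint Platform PCE
# Mount only after the system state device is mounted at /var/lib/gravity.
After=var-lib-gravity.mount
Requires=var-lib-gravity.mount

[Mount]
What=/dev/xvdX
Where=/var/lib/gravity/planet/etcd
Type=ext4

[Install]
WantedBy=local-fs.target

After creating or editing a unit file, run sudo systemctl daemon-reload, then enable and start the unit as shown above.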
Docker Device
Unless configured otherwise, Docker defaults to the overlay storage driver, which is recommended for production.
Format this device as either xfs or ext4 and mount it as /var/lib/gravity/planet/docker.
The following example shell snippet mounts the Docker device using systemd mount files to the path /var/lib/gravity/planet/docker. You can also mount the Docker device using /etc/fstab or another method. Consult your system administrator to determine the best way to meet this requirement. If you use the following example as a guide, replace <device name> with the correct device name in both places it appears.
Mount this device AFTER you mount the system state device, because /var/lib/gravity/planet/docker sits under the system state mount at /var/lib/gravity. If you are using systemd units to mount the volumes, declare the mount dependency with After=<mount unit name>.mount, for example After=var-lib-gravity.mount in the Docker mount unit.
sudo mkfs.ext4 /dev/<device name>
sudo mkdir -p /var/lib/gravity/planet/docker
echo -e "[Mount]\nWhat=/dev/<device name>\nWhere=/var/lib/gravity/planet/docker\nType=ext4\n[Install]\nWantedBy=local-fs.target" | sudo tee /etc/systemd/system/var-lib-gravity-planet-docker.mount
sudo systemctl daemon-reload
sudo systemctl enable var-lib-gravity-planet-docker.mount
sudo systemctl start var-lib-gravity-planet-docker.mount
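Once all four devices are mounted, the following commands offer one way to confirm that each mount is active and enabled at boot. This is only a suggested check against the mount points listed above, not part of the official procedure:

# Show the backing device and filesystem for each required mount point.
for m in /var/lib/gravity /var/lib/data /var/lib/gravity/planet/docker /var/lib/gravity/planet/etcd; do
    findmnt -no SOURCE,FSTYPE,TARGET "$m"
done
# Confirm the corresponding systemd mount units start at boot.
systemctl is-enabled var-lib-gravity.mount var-lib-data.mount var-lib-gravity-planet-docker.mount var-lib-gravity-planet-etcd.mount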
Anypoint Monitoring
Anypoint Monitoring is an optional add-on that Anypoint Private Cloud supports. See Anypoint Monitoring on Anypoint Platform PCE for more information.
The following prerequisites apply to the three additional nodes required for the Monitoring Center Add-On.
Memory and CPU Requirements
The hardware requirements for Monitoring Center Add-On nodes differ from those for the Platform Core nodes: they require more CPU, more memory, and different disk mounts.

| Component | Requirement |
|---|---|
| RAM | 128 GB |
| CPU | 32 cores |
For production environments, each node in your configuration must have the following dedicated devices. All devices must be formatted either as xfs or ext4.
| Component | Size | Minimum IOPS | Mount Point | Description |
|---|---|---|---|---|
| HDD 2 System State Directory Device | 250 GB | 1500 | /var/lib/gravity | Stores system configuration and metadata, including databases and packages. Package sizes can be large, so it is important to estimate the minimum size requirements and allocate enough space as a dedicated device before installation. |
| HDD 3 Docker Device | 100 GB | 3000 | /var/lib/gravity/planet/docker | Used by Docker's overlay storage driver. A minimum of 100 GB is required for the overlay directory. Devices with 50 GB or less experience degraded system performance or might not work at all. |
| HDD 4 Cassandra AMV Device | 500 GB | 1500 | /var/lib/data/dias-meld & /var/lib/data/dias-cassandra | Stores Cassandra database data for the Monitoring Center Add-On. The amount of space required should be at least 250 GB, but can vary depending on your specific use case. It is important to estimate the minimum size requirements and allocate enough space as a dedicated device ahead of time. |
| HDD 5 Stolon AMV Device | 250 GB | 1500 | /var/lib/data/stolon-amv | Stores Postgres database data for the Monitoring Center Add-On. The amount of space required should be at least 250 GB, but can vary depending on your specific use case. It is important to estimate the minimum size requirements and allocate enough space as a dedicated device ahead of time. |
| HDD 6 InfluxDB AMV Device | 4 TB | 1500 | /var/lib/data/influxdb | Stores InfluxDB data for the Monitoring Center Add-On. The amount of space required should be at least 250 GB, but can vary depending on your specific use case. It is important to estimate the minimum size requirements and allocate enough space as a dedicated device ahead of time. |
| HDD 7 Logstash AMV Device | 500 GB | 1500 | /var/lib/data/logstash | Stores Logstash data for the Monitoring Center Add-On. The amount of space required should be at least 250 GB, but can vary depending on your specific use case. It is important to estimate the minimum size requirements and allocate enough space as a dedicated device ahead of time. |
| /tmp (Installer) | 20 GB | N/A | /tmp | |
| / (root) | 10 GB | N/A | / | You must have at least 10 GB in your home directory to download and unzip the installer file. |

The space requirements listed in the previous table are based on average use. For environments with heavy traffic, validate the amount of space needed with MuleSoft Professional Services.
System State Directory Device
The main purpose of the system state directory device is to store system configuration and metadata, such as databases and packages. Because package sizes can be arbitrarily large, it is important to estimate the minimum size requirements and allocate enough space for the dedicated device.
Format this device as either xfs or ext4 and mount it as /var/lib/gravity.
The following shell snippet is an example of mounting the system state directory device using systemd mount files; it cannot be used as-is in production. You can also mount the system state directory device by using /etc/fstab or some other method. Consult your system administrator to determine the best way to meet this requirement. If you use the following example as a guide, replace <device name> with the correct device name in both places it appears.
sudo mkfs.ext4 /dev/<device name>
sudo mkdir -p /var/lib/gravity
echo -e "[Mount]\nWhat=/dev/<device name>\nWhere=/var/lib/gravity\nType=ext4\n[Install]\nWantedBy=local-fs.target" | sudo tee /etc/systemd/system/var-lib-gravity.mount
sudo systemctl daemon-reload
sudo systemctl enable var-lib-gravity.mount
sudo systemctl start var-lib-gravity.mount
Docker Device
Unless configured otherwise, Docker defaults to the overlay storage driver, which is recommended for production.
Format this device as either xfs or ext4 and mount it as /var/lib/gravity/planet/docker.
The following example shell snippet mounts the Docker device using systemd mount files to the path /var/lib/gravity/planet/docker. You can also mount the Docker device using /etc/fstab or another method. Consult your system administrator to determine the best way to meet this requirement. If you use the following example as a guide, replace <device name> with the correct device name in both places it appears.
Mount this device AFTER you mount the system state device, because /var/lib/gravity/planet/docker sits under the system state mount at /var/lib/gravity. If you are using systemd units to mount the volumes, declare the mount dependency with After=<mount unit name>.mount, for example After=var-lib-gravity.mount in the Docker mount unit.
sudo mkfs.ext4 /dev/<device name>
sudo mkdir -p /var/lib/gravity/planet/docker
echo -e "[Mount]\nWhat=/dev/<device name>\nWhere=/var/lib/gravity/planet/docker\nType=ext4\n[Install]\nWantedBy=local-fs.target" | sudo tee /etc/systemd/system/var-lib-gravity-planet-docker.mount
sudo systemctl daemon-reload
sudo systemctl enable var-lib-gravity-planet-docker.mount
sudo systemctl start var-lib-gravity-planet-docker.mount
DIAS Data Device
The main purpose of the DIAS data device is to store Cassandra and Meld database data for the Monitoring Center Add-On. The minimum amount of space required is 250 GB, but your use case might require more. It is important to allocate enough space for the dedicated device.
Format this device as either xfs or ext4 and mount it as /var/lib/data.
The following shell snippet is an example of mounting the DIAS data device using systemd mount files; it cannot be used as-is in production. You can also mount the DIAS data device using /etc/fstab or some other method. Consult your system administrator to determine the best way to meet this requirement. If you use the following example as a guide, replace <device name> with the correct device name in both places it appears.
sudo mkfs.ext4 /dev/<device name>
sudo mkdir -p /var/lib/data
echo -e "[Mount]\nWhat=/dev/<device name>\nWhere=/var/lib/data\nType=ext4\n[Install]\nWantedBy=local-fs.target" | sudo tee /etc/systemd/system/var-lib-data.mount
sudo systemctl daemon-reload
sudo systemctl enable var-lib-data.mount
sudo systemctl start var-lib-data.mount
After the device is mounted, create the following directories, if they do not already exist, on every Monitoring Center Add-On node (an example command follows this list):
- /var/lib/data/dias-cassandra
- /var/lib/data/dias-meld
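For example, you can create both directories in one command on each node:

# Creates both data directories; -p makes this a no-op if they already exist.
sudo mkdir -p /var/lib/data/dias-cassandra /var/lib/data/dias-meld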
Stolon AMV Device
The main purpose of the Stolon AMV device is to store Postgres database data for the Monitoring Center Add-On. The minimum amount of space required is 250 GB, but your use case might require more. It is important to allocate enough space for the dedicated device.
Format this device as either xfs or ext4 and mount it as /var/lib/data/stolon-amv.
The following shell snippet is an example of mounting the Stolon AMV device using systemd mount files; it cannot be used as-is in production. You can also mount the Stolon AMV device using /etc/fstab or some other method. Consult your system administrator to determine the best way to meet this requirement. If you use the following example as a guide, replace <device name> with the correct device name in both places it appears.
sudo mkfs.ext4 /dev/<device name>
sudo mkdir -p /var/lib/data/stolon-amv
echo -e "[Mount]\nWhat=/dev/<device name>\nWhere=/var/lib/data/stolon-amv\nType=ext4\n[Install]\nWantedBy=local-fs.target" | sudo tee /etc/systemd/system/var-lib-data-stolon-amv.mount
sudo systemctl daemon-reload
sudo systemctl enable var-lib-data-stolon-amv.mount
sudo systemctl start var-lib-data-stolon-amv.mount
InfluxDB AMV Device
The main purpose of the InfluxDB AMV device is to store InfluxDB data for the Monitoring Center Add-On. The minimum amount of space required is 250 GB, but your use case might require more. It is important to allocate enough space for the dedicated device.
Format this device as either xfs or ext4 and mount it as /var/lib/data/influxdb.
The following shell snippet is an example of mounting the InfluxDB AMV device using systemd mount files; it cannot be used as-is in production. You can also mount the InfluxDB AMV device using /etc/fstab or some other method. Consult your system administrator to determine the best way to meet this requirement. If you use the following example as a guide, replace <device name> with the correct device name in both places it appears.
sudo mkfs.ext4 /dev/<device name>
sudo mkdir -p /var/lib/data/influxdb
echo -e "[Mount]\nWhat=/dev/<device name>\nWhere=/var/lib/data/influxdb\nType=ext4\n[Install]\nWantedBy=local-fs.target" | sudo tee /etc/systemd/system/var-lib-data-influxdb.mount
sudo systemctl daemon-reload
sudo systemctl enable var-lib-data-influxdb.mount
sudo systemctl start var-lib-data-influxdb.mount
After the device is mounted, create the following directories, if they do not already exist, on every Monitoring Center Add-On node:
- /var/lib/data/influxdb/data
- /var/lib/data/influxdb/meta
Logstash AMV Device
The main purpose of the Logstash AMV device is to store Logstash data for the Monitoring Center Add-On. The minimum amount of space required is 250 GB, but your use case might require more. It is important to allocate enough space for the dedicated device.
Format this device as either xfs or ext4 and mount it as /var/lib/data/logstash.
The following shell snippet is an example of mounting the Logstash AMV device using systemd mount files; it cannot be used as-is in production. You can also mount the Logstash AMV device using /etc/fstab or some other method. Consult your system administrator to determine the best way to meet this requirement. If you use the following example as a guide, replace <device name> with the correct device name in both places it appears.
sudo mkfs.ext4 /dev/<device name>
sudo mkdir -p /var/lib/data/logstash
echo -e "[Mount]\nWhat=/dev/<device name>\nWhere=/var/lib/data/logstash\nType=ext4\n[Install]\nWantedBy=local-fs.target" | sudo tee /etc/systemd/system/var-lib-data-logstash.mount
sudo systemctl daemon-reload
sudo systemctl enable var-lib-data-logstash.mount
sudo systemctl start var-lib-data-logstash.mount
Verify Devices
To verify that all devices are set up the same way on all nodes, run the lsblk command on the Anypoint Monitoring and Visualizer nodes. Output from lsblk should resemble the following example:
[root@ip-0-0-0-0 ~]# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
xvda    202:0    0   90G  0 disk
├─xvda1 202:1    0    1M  0 part
└─xvda2 202:2    0   90G  0 part /
xvdb    202:16   0  100G  0 disk /var/lib/gravity/planet/docker
xvdd    202:48   0  250G  0 disk /var/lib/data
xvde    202:64   0  250G  0 disk /var/lib/gravity
xvdf    202:64   0  250G  0 disk /var/lib/data/stolon-amv
xvdg    202:64   0  250G  0 disk /var/lib/data/influxdb
xvdh    202:64   0  250G  0 disk /var/lib/data/logstash
xvdi    202:64   0   20G  0 disk /tmp
Verify Disk IOPS
Depending on your hardware or virtualization provider, you can configure disk IOPS (I/O operations per second). Using the iops command, verify the available IOPS on each device:
$ sudo ./iops --time 2 /dev/xvdb
/dev/xvdb, 107.37 GB, 32 threads:
 512 B blocks: 1893.0 IO/s, 946.5 KiB/s (  7.8 Mbit/s)
   1 KiB blocks: 1354.8 IO/s,   1.3 MiB/s ( 11.1 Mbit/s)
   2 KiB blocks: 1091.8 IO/s,   2.1 MiB/s ( 17.9 Mbit/s)
   4 KiB blocks:  807.1 IO/s,   3.2 MiB/s ( 26.4 Mbit/s)
   8 KiB blocks:  803.7 IO/s,   6.3 MiB/s ( 52.7 Mbit/s)
  16 KiB blocks:  787.4 IO/s,  12.3 MiB/s (103.2 Mbit/s)
  32 KiB blocks:  700.8 IO/s,  21.9 MiB/s (183.7 Mbit/s)
  64 KiB blocks:  590.0 IO/s,  36.9 MiB/s (309.3 Mbit/s)
 128 KiB blocks:  327.6 IO/s,  40.9 MiB/s (343.5 Mbit/s)
...
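If the iops script is not available in your environment, fio is a commonly used alternative for spot-checking random-write IOPS. The job parameters below are only an illustrative assumption, and the test writes to the raw device, so run it only against an empty, unmounted device before you create the filesystem:

# WARNING: writes directly to the raw device; use only on an empty, unmounted disk.
sudo fio --name=iops-check --filename=/dev/<device name> \
    --rw=randwrite --bs=4k --ioengine=libaio --iodepth=32 --direct=1 \
    --runtime=30 --time_based --group_reporting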
Load Balancer Requirements
Anypoint Platform PCE must run in a production environment with multiple servers. To distribute traffic among servers and to restrict access to specific ports, you must provide and configure a load balancer.
You can use any load balancer, including NGINX, but NGINX is not required.
If using the AWS Provisioner for Anypoint Platform PCE, you do not need to configure a separate load balancer. An Amazon ELB with the required configuration is created during installation.
Load Balancer Configuration
You can configure your load balancer to use any method for distributing client requests, but in most contexts a round robin strategy is ideal. The load balancer should be accessible through an IP address that is available to all machines in your network.
Your load balancer must route the following TCP ports:

| Load Balancer Port | Instance Port | Internal Usage |
|---|---|---|
| | | HTTP redirects to HTTPS |
| | | HTTPS port |
| | | HTTPS port for Mule runtimes to authenticate and renew certificates |
| | | WebSocket port for Mule runtimes to connect |
| | | (Only when Monitoring Center is enabled) |
| | | Ops Center access port |
In each case, your load balancer must listen on the load balancer port and redirect incoming requests to the instance port. Your Anypoint Platform PCE installation includes an internal NGINX server that listens on each of the configured instance ports, and then performs the action listed in the Internal Usage column.
Your load balancer should poll HTTP port 10256 at the path /healthz to run a health check on your platform servers and confirm that they are accessible.
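As a quick manual check of the same endpoint, you can query a node directly; the node address below is a placeholder:

# Expect an HTTP 200 response when the node is healthy.
curl -i http://<node ip>:10256/healthz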
Do not configure SSL certificates in your load balancer. TLS termination is handled by the platform with the certificates configured using Access Management. See Configure Anypoint Platform PCE.
Implement a Load Balancer Using NGINX
1. Enable the stream block in your /etc/nginx/nginx.conf file by referencing all the configuration properties defined in /etc/nginx/stream.d/*:

stream {
    include /etc/nginx/stream.d/*.conf;
}
2. Delete the default.conf file from /etc/nginx/conf.d.

3. Create a folder named /etc/nginx/stream.d and in it create a file named onprem.conf.

4. Paste the following content into onprem.conf:
server { listen 80; proxy_pass 0.0.0.0:30080; }
server { listen 443; proxy_pass 0.0.0.0:30443; }
server { listen 8083; proxy_pass 0.0.0.0:30883; }
server { listen 8889; proxy_pass 0.0.0.0:30889; }
server { listen 8895; proxy_pass 0.0.0.0:30895; }
server { listen 9500; proxy_pass 0.0.0.0:32009; }
server { listen 9501; proxy_pass 0.0.0.0:30083; }
5. Enable the default ports on SELinux before starting NGINX. Run the following commands one by one:
semanage port -a -t http_port_t -p tcp 8083
semanage port -a -t http_port_t -p tcp 8889
semanage port -a -t http_port_t -p tcp 8895
semanage port -a -t http_port_t -p tcp 9500
semanage port -a -t http_port_t -p tcp 9501
6. Start NGINX:
service nginx restart
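After NGINX restarts, you can confirm that it is listening on the configured front-end ports; this is only a suggested spot check:

# List listening TCP sockets owned by nginx; expect ports 80, 443, 8083, 8889, 8895, 9500, and 9501.
sudo ss -tlnp | grep nginx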