NFS Server Prerequisites
| Before you install Anypoint Platform Private Cloud Edition (Anypoint Platform PCE), your infrastructure team must review each of the following sections and verify that your environment meets the stated requirements. If needed, contact your MuleSoft representative for assistance to ensure your NFS server is configured correctly. |
CSI Driver Requirement
Starting with Anypoint Platform PCE 4.2.0, a pre-installed Container Storage Interface (CSI) driver is a mandatory requirement. Anypoint Platform PCE uses the Kubernetes CSI driver to mount NFS volumes into pods, replacing the previous method that relied on host binaries.
Some key benefits:
- Enhanced support for environments where custom tools cannot be installed on host nodes.
- Direct support for Amazon EKS users who can only use the Amazon EFS CSI Driver.
Configure the CSI Driver
In the platformConfiguration section of your input.yaml file, specify the CSI driver name:
platformConfiguration:
fileSystemCsiDriverName: "" # CSI driver name (e.g., nfs.csi.k8s.io, efs.csi.aws.com)
EFS CSI Driver Configuration
When using the Amazon EFS CSI Driver (for example, for Amazon EKS with EFS installations), provide four individual CSI volume handles. These volume handles must point to four access points that all target the same directory in the EFS:
platformConfiguration:
fileSystemCsiDriverName: "efs.csi.aws.com"
fileSystemWcCsiVolumeHandle: "" # CSI volume handle for wc filesystem
fileSystem01CsiVolumeHandle: "" # CSI volume handle for 01 filesystem
fileSystemBareCsiVolumeHandle: "" # CSI volume handle for bare filesystem
fileSystemBackupRestoreCsiVolumeHandle: "" # CSI volume handle for backup-restore filesystem
Each volume handle value follows the format <file-system-id>::<access-point-id>, for example: fs-0835b0d1eb7588eb1::fsap-03f6c68563ceb5acf.
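For illustration only, a filled-in configuration might look like the following sketch. It reuses the example file system ID from above; the four access point IDs are placeholders that you must replace with your own, and all four access points must target the same EFS directory:

```yaml
platformConfiguration:
  fileSystemCsiDriverName: "efs.csi.aws.com"
  # Placeholder access point IDs; substitute your own fsap-* values.
  fileSystemWcCsiVolumeHandle: "fs-0835b0d1eb7588eb1::fsap-<wc-access-point-id>"
  fileSystem01CsiVolumeHandle: "fs-0835b0d1eb7588eb1::fsap-<01-access-point-id>"
  fileSystemBareCsiVolumeHandle: "fs-0835b0d1eb7588eb1::fsap-<bare-access-point-id>"
  fileSystemBackupRestoreCsiVolumeHandle: "fs-0835b0d1eb7588eb1::fsap-<backup-restore-access-point-id>"
```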
| When using the EFS CSI Driver, you do not need to provide fileSystemDNS or fileSystemPath values. |
Verify NFS Server Setup
Anypoint Platform PCE version 2.0 and later requires a Network File System (NFS) server and an available NFS endpoint. Your NFS server must meet the following requirements:
- Must be running on a filesystem that has at least 250 GB available.
- Must be using NFS 4.x.
- Can be backed by a network-attached storage (NAS) system.
- Must have port 2049 enabled for both TCP and UDP.
- Must be reachable from all of the PCE nodes. Configure your network accordingly; the direct NFS client (Kubernetes 1.9 and later) requires conformance with NFS version 4.
NFS 4.x does not use the portmapper service.
Verify Required Port Is Open
The NFS server requires port 2049 to be open for TCP and UDP.
On each node, use netcat or a similar utility to verify that the required port is open. Netcat is a networking utility that reads from and writes to network connections using TCP or UDP. Run:
nc -zv <host_name> 2049
| By default, the netcat tool is not included with Anypoint Platform PCE. If you receive a command not found error, you can download and install netcat or a similar utility. |
When the check succeeds, netcat returns a response similar to the following example:
[ec2-user@ip-10-1-1-97 ~]$ nc -zv <nfs_host_name> 2049
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to 10.1.0.155:2049.
Ncat: 0 bytes sent, 0 bytes received in 0.01 seconds.
Verify NFS Server Is Mounted
On each node, mount the NFS server in a temporary directory.
- Switch to root:
sudo su -
- Mount the NFS server in a temporary directory. Run the following commands, replacing <nfs-server> and <path> with your own values:
mkdir -p /mnt/home
mount -t nfs4 -o proto=tcp,port=2049 <nfs-server>:<path> /mnt/home/
If the mount fails, you might not have the required permissions.
Verify NFS Performance
For optimal NFS performance, carefully analyze your environment from both the client and the server point of view. This involves:
- Testing the performance and latency of your NFS server. These operations transfer data from the client (the Anypoint Platform PCE environment) to the NFS server and measure how long each transfer takes.
- Using the command time dd if=/dev/zero of=/mnt/home/<nameOfFile> bs=<blockSize> count=<amountOfBlocks> to test NFS server performance, where <nameOfFile> is the name of the test file, and <blockSize> and <amountOfBlocks> are the block size and the number of blocks in the transfer.
- Validating how Anypoint Platform PCE handles NFS file transfers by performing read and write operations with both large (128 MB) and small (80 KB) files.
Perform all tests on each node.
Test Writing and Reading Large Files on the NFS Server
Perform the following read and write tests using large files:
- File size: 128 MB
- Block size: 4 KB
- Number of blocks: 32768
While the tests are running, different files are generated (one file per test).
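These parameters are consistent with each other: a quick sanity check using plain shell arithmetic (nothing PCE-specific) confirms that 32768 blocks of 4 KB add up to 128 MB per file:

```shell
# 4 KB blocks x 32768 blocks = 134217728 bytes = 128 MiB per test file.
bs=4096
count=32768
bytes=$((bs * count))
echo "$bytes bytes"
echo "$((bytes / 1024 / 1024)) MB per file"
echo "$((bytes * 5 / 1024 / 1024)) MB across the five test files"
```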
- Create a directory in which to mount the NFS server:
sudo su -
mkdir -p /mnt/home
- Mount the NFS server:
mount -t nfs4 -o proto=tcp,port=2049 <nfs-server>:<path> /mnt/home/
- Perform writes (five files of 128 MB each) on the NFS server:
time for i in {1..5}; do dd if=/dev/zero of=/mnt/home/greatfile$i.test bs=4k count=32768; done
The command output is similar to the following:
[root@ip-10-1-1-97 ec2-user]# time for i in {1..5}; do dd if=/dev/zero of=/mnt/home/greatfile$i.test bs=4k count=32768; done
32768+0 records in
32768+0 records out
134217728 bytes (134 MB) copied, 2.12805 s, 63.1 MB/s
.
.
.
real 0m8.378s
user 0m0.034s
sys 0m0.792s
The real parameter in the test output should be less than 15 seconds.
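To relate the figures that dd reports, you can divide the bytes copied by the elapsed seconds. This awk one-liner, using the sample numbers from the output above, reproduces the rate dd prints (dd uses decimal megabytes, 10^6 bytes):

```shell
# 134217728 bytes in 2.12805 s ~= 63.1 MB/s, matching dd's reported rate.
awk 'BEGIN { printf "%.1f MB/s\n", 134217728 / 2.12805 / 1000000 }'
```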
Validate Read Performance
- Unmount the NFS server:
umount /mnt/home
- Remount the NFS server:
mount -t nfs4 -o proto=tcp,port=2049 <nfs-server>:<path> /mnt/home/
- Perform reads from the NFS server by running the following command:
time for i in {1..5}; do dd if=/mnt/home/greatfile$i.test of=/dev/null bs=4k; done
The real parameter in the test output should be less than 15 seconds.
Test Writing and Reading Small Files on the NFS Server
Test small files as you did large files, but with these parameters:
- File size: 80 KB
- Block size: 4 KB
- Number of blocks: 20
Before starting the test, verify that the /mnt/home directory from the previous test is created.
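These parameters are again internally consistent: 20 blocks of 4 KB make one 80 KB file, and the test loop writes 30 such files. A quick shell check (nothing PCE-specific) confirms the totals:

```shell
# 4 KB blocks x 20 blocks = 81920 bytes = 80 KiB per test file.
bs=4096
count=20
bytes=$((bs * count))
echo "$((bytes / 1024)) KB per file"
echo "$((bytes * 30 / 1024)) KB across the 30 test files"
```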
- Mount the NFS server:
mount -t nfs4 -o proto=tcp,port=2049 <nfs-server>:<path> /mnt/home/
- Perform writes (30 files of 80 KB each) on the NFS server:
time for i in {1..30}; do dd if=/dev/zero of=/mnt/home/smallfile$i.test bs=4k count=20; done
The real parameter in the test output should be around 2 seconds.
Validate Read Performance
- Unmount the NFS server:
umount /mnt/home
- Remount the NFS server:
mount -t nfs4 -o proto=tcp,port=2049 <nfs-server>:<path> /mnt/home/
- Perform reads from the NFS server:
time for i in {1..30}; do dd if=/mnt/home/smallfile$i.test of=/dev/null bs=4k; done
The important parameter in the test output is the real parameter, which should be around 2 seconds.
Override NFS Mount Options
By default, Anypoint Platform PCE 4.2.0 sets the noresvport NFS mount option, which is strongly recommended for AWS and other cloud environments. If this default conflicts with your environment requirements, you can override the NFS mount options by providing a comma-separated string in the platformConfiguration section of your input.yaml file:
platformConfiguration:
fileSystemBareMountOptionsOverride: ""
fileSystemWcMountOptionsOverride: ""
fileSystem01MountOptionsOverride: ""
For example, to remove the noresvport option, provide a string with your custom mount options:
nfsvers=4.0,retrans=2,rsize=1048576,soft,sync,timeo=600,wsize=1048576
| These fields are optional. If not specified, the default mount options are used. |
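If you do override the defaults, it can be worth double-checking that the override string contains exactly the options you intend. For example, this small shell check (an illustration, not part of the installer) confirms that noresvport is absent from the example string above:

```shell
# Verify that a custom mount-option string omits the noresvport option.
opts="nfsvers=4.0,retrans=2,rsize=1048576,soft,sync,timeo=600,wsize=1048576"
case ",$opts," in
  *,noresvport,*) echo "noresvport present" ;;
  *)              echo "noresvport absent" ;;
esac
```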