
AWS Prerequisites

If you are using Amazon Web Services (AWS), you must create the resources required to install Anypoint Platform Private Cloud Edition (Anypoint Platform PCE) on AWS. Anypoint Platform PCE supports 4-node and 7-node configurations in a production environment on AWS.

You do not need this information if you are starting from bare metal servers.

Your infrastructure team, or someone with AWS administrator level knowledge, must perform the following tasks. If needed, contact your MuleSoft representative for help.

AWS Account Permissions and Resources

To install Anypoint Platform PCE on AWS:

  • Your AWS account must have AWS keys with EC2FullAccess and S3FullAccess permissions.

  • When you create your AWS environment, the following resources are created:

    Table 1. AWS-Created Resources
    AWS Resource              Number Required (4-node)   Number Required (7-node)   Anypoint Monitoring (Optional)
    root disk @ 500 iops
    EBS volumes @ 1500 iops
    EBS volume @ 3000 iops
    Amazon ELB

Run the AWS Provisioner

MuleSoft provides a Docker image that you can use to provision the resources for your AWS account. You can also run additional custom shell scripts on the provisioned instances as shown in the Custom provisioning scripts section.

  1. Create an initial instance (t2.small) in your AWS account, in any VPC with internet access.

    This instance runs the actual provisioning of the cluster. You must use an AMI that has Docker installed by default or you must manually install Docker after creating the AWS instance.

    You can also run the provisioner remotely from any other machine that has Docker and internet access.

  2. Download the Private Cloud Provisioner (PCP) Docker image from:

  3. Copy the provisioner Docker image to the instance via SCP:

    scp -i <guest>.pem ~/Downloads/private-cloud-provisioner-<version>.tar.gz ec2-user@W.X.Y.Z:/home/ec2-user
  4. SSH into the instance:

    ssh -i 'anypoint.pem' ec2-user@W.X.Y.Z
  5. Create the variable file (pce.env) with the environment details using the following values:

    Table 2. Required Environment Variables
    Name Description


    Specifies the AWS access ID Terraform uses to connect to your AWS account.


    Specifies the AWS access key Terraform uses to connect to your AWS account.


    Specifies the AWS temporary session token, if one exists.


    Specifies the AWS SSH key. Do not include the .pem extension.


    Specifies the AWS region where Terraform creates the cluster, for example, us-east-2.


    Specifies the ssh user, for example, ec2-user, centos, etc.


    Specifies the list of trusted CIDRs that are allowed to access the environment. Use [] to open the cluster to the internet.


    Specifies the name of the cluster and the corresponding AWS resources.


    Must be set to node.


    Specifies the number of nodes in your cluster. Possible values are 4 and 7.


    Must be m5.2xlarge.


    Specifies the number of nodes for the AMV addon in your cluster. Must be 3 if you plan to install the AMV addon; omit otherwise.


    Must be m5.8xlarge if you plan to install the AMV addon; omit otherwise.


    Must be set to true for production environments.

    You can also include the following set of optional environment variables:

    Table 3. Optional Environment Variables
    Name Description


    Specifies the AMI name to be used for the instances. Use the AMI name, NOT the AMI ID. Ensure that the AMI does not provision any additional volumes. If you do not set this variable, the provisioner uses the following AMI by default: RHEL-8.3.0_HVM-20201031-x86_64-0-Hourly2-GP2

    TF_VAR_monitoring=<true or false>

    If true, creates basic instance alarms in Cloudwatch.

    TF_VAR_use_bastion=<true or false>

    If true, creates a small instance (using an ASG) in a public subnet as a jumpbox and associates a public IP address, and launches the cluster instances in the private subnets.

    TF_VAR_internal=<true or false>

    If true, the cluster instances are launched in private subnets and do not have public IPs associated with them. Also, the provisioned load balancer is internal only.


    Specifies an existing VPC that should already have an internet gateway attached to it. Subnets are still provisioned within the existing VPC.


    Enables you to pass a preferred CIDR for the new VPC to be provisioned. (Requires that you do not pass AWS_VPC_ID.)


    Enables you to pass a list of subnet IDs on which the instances are launched. If you reuse your own subnets, you must set up Route Tables, NAT Gateways, and an Internet Gateway, for example, ["subnet-0a17317984065f98f", "subnet-0600b4befb27c7949", "subnet-02103e0c935eff75a"].


    Enables you to optionally pass the preferred CIDR blocks for the NEW subnets to be provisioned. You must pass exactly as many CIDR blocks as the minimum of the number of AZs in the region and the number of nodes in your cluster, for example, '["", "", ""]'.


    Enables you to optionally pass the preferred CIDR blocks for the NEW subnets to be provisioned. You must pass exactly as many CIDR blocks as the minimum of the number of AZs in the region and the number of nodes in your cluster, for example, '["", "", ""]'.


    Enables you to specify a ROLE tag value to be applied to all AWS resources.

  6. Load the provisioner Docker image into the local Docker registry:

    docker load -i private-cloud-provisioner-<version>.tar.gz
  7. Perform a dry-run test:

    docker run --rm --env-file pce.env <provisioner-image> dry-run
  8. Run the provisioner:

    docker run --rm --env-file pce.env <provisioner-image> cluster-provision

    After the provisioner runs successfully, it displays information about your environment including IP addresses and DNS name of the load balancer.

  9. Verify that the provisioning script ran successfully by checking for the existence of /var/lib/bootstrap_complete on the instances.
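As a reference for step 5, a minimal pce.env for a 4-node cluster might look like the following sketch. The AWS credential variable names here are assumptions based on standard AWS/Terraform conventions; only TELEKUBE_CLUSTER_NAME, TF_VAR_monitoring, TF_VAR_use_bastion, and TF_VAR_internal appear in this document. Confirm the exact names for your provisioner version before using them.

```shell
# Sketch of a pce.env variable file; all values are placeholders.
AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX    # assumed name: the AWS access ID for Terraform
AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxx   # assumed name: the AWS access key for Terraform
AWS_DEFAULT_REGION=us-east-2             # assumed name: region where the cluster is created
TELEKUBE_CLUSTER_NAME=my-pce-cluster     # cluster name (variable name from this document)
TF_VAR_monitoring=true                   # optional: basic instance alarms in CloudWatch
TF_VAR_use_bastion=false                 # optional: jumpbox in a public subnet
TF_VAR_internal=false                    # optional: private subnets and internal-only ELB
```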

Custom Provisioning Scripts (Optional)

You can run your own shell scripts on the provisioned instances before and/or after the PCE provisioner scripts. To do so, place your shell scripts, with the .sh extension, inside a folder named pre-user-data and/or post-user-data, and include the following volume mounts in the docker run command:

docker run --rm -v $(pwd)/pre-user-data:/usr/local/bin/provisioner/terraform/external/pre-user-data -v $(pwd)/post-user-data:/usr/local/bin/provisioner/terraform/external/post-user-data --env-file pce.env <provisioner-image> cluster-provision
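For example, a custom hook could be set up as follows. The script name and its contents are purely illustrative; only the pre-user-data/post-user-data folder names and the .sh extension come from the text above.

```shell
# Create the hook folders and an illustrative post-provisioning script.
mkdir -p pre-user-data post-user-data

cat > post-user-data/10-motd.sh <<'EOF'
#!/bin/sh
# Illustrative hook: runs on each instance after the PCE provisioner scripts.
echo "provisioned by PCE"
EOF
chmod +x post-user-data/10-motd.sh

# Smoke-test the hook locally before handing it to the provisioner:
sh post-user-data/10-motd.sh
```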

Open Port 61009 Before Installation

If you are installing Anypoint Platform PCE using the GUI-based installer, you must open port 61009 in the cluster’s security group, using the AWS Web Console, before running the installer.
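If you prefer the command line over the AWS Web Console, the ingress rule can also be added with the AWS CLI. The sketch below only composes the call; the security-group ID and CIDR are placeholders you must replace with your own values.

```shell
# Sketch: compose the AWS CLI call that opens port 61009 in the cluster's
# security group. Drop the leading `echo` inside the function to actually
# run it against your AWS account.
open_installer_port() {
  sg_id="$1"
  cidr="$2"
  echo aws ec2 authorize-security-group-ingress \
    --group-id "$sg_id" --protocol tcp --port 61009 --cidr "$cidr"
}

# Placeholder security-group ID and trusted CIDR:
open_installer_port sg-0123456789abcdef0 203.0.113.0/24
```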

Destroy Resources

You can destroy all resources that were created using the provisioner by running the cluster-deprovision command, as shown in the following example. Ensure the TELEKUBE_CLUSTER_NAME environment variable in your environment file has the correct value for the target cluster you want to destroy:

docker run --rm --env-file pce.env <provisioner-image> cluster-deprovision
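Because deprovisioning is destructive, it can be worth a quick scripted check that pce.env names the cluster you intend to destroy. This guard is an assumption, not part of the official tooling; the sample pce.env written below is only so the sketch is self-contained, and in practice you would check the file you created during provisioning.

```shell
# Guard sketch: confirm pce.env targets the intended cluster before
# running cluster-deprovision. TELEKUBE_CLUSTER_NAME is the variable
# named in the text above; the sample file is illustrative only.
cat > pce.env <<'EOF'
TELEKUBE_CLUSTER_NAME=my-pce-cluster
EOF

intended="my-pce-cluster"
if grep -q "^TELEKUBE_CLUSTER_NAME=${intended}$" pce.env; then
  echo "pce.env targets ${intended}; safe to deprovision"
else
  echo "pce.env does not target ${intended}; aborting" >&2
  exit 1
fi
```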


Troubleshooting

Review the following information if you experience problems when creating AWS resources.

  • 403 Forbidden

Check that the policies attached to your keys do not deny access to any EC2 or S3 resources. You should be able to run the following basic command using the AWS CLI with your keys:

aws sts get-caller-identity
  • Limit exceeded

The AWS account might have limits on the number of resources you can create. Delete some unused resources or request a limit increase from AWS.

  • AMI not found

Make sure you are using the AMI name, not the AMI ID, as the value of the environment variable.