
Install Runtime Fabric on AWS

This topic describes how to install Anypoint Runtime Fabric on VMs / Bare Metal on your AWS account. Note that you can also use AWS EKS to install Runtime Fabric on Self-Managed Kubernetes.

Before installing Runtime Fabric, refer to Shared Responsibility for Runtime Fabric on VMs / Bare Metal. If your infrastructure does not meet the minimum hardware, operating system, and networking requirements, Runtime Fabric on VMs / Bare Metal will not operate successfully.

Your organization’s operations, networking, and security teams must be involved. See Anypoint Runtime Fabric Installation Prerequisites.

Before You Begin

Before installing Anypoint Runtime Fabric on VMs / Bare Metal on AWS, ensure the following requirements have been met:

  • You have created a Runtime Fabric in Runtime Manager.

  • Your Anypoint Platform user account has the Manage Runtime Fabrics permission.

  • You are able to run Terraform on your machine.

  • Your AWS user has access to create EC2 instances, Disks, VPCs, and Security Groups.

  • Your AWS account has enough quota for the infrastructure required to run Anypoint Runtime Fabric. This includes having the capacity to create an additional VPC, EC2 instances, disks, security group, and other required resources.

  • You have the AWS key pair required to provision virtual machines. This key pair is required to enable secure access to your VMs via SSH (Secure Shell).

  • You have access to the following AWS-specific environment variables: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN, AWS_REGION, AWS_DEFAULT_REGION. These are required to run the Terraform script. If you do not know these values, contact your AWS administrator.

  • You have disabled any antivirus agents, such as McAfee, running in your environment.

Base64 Encode Your Mule License Key

To install Runtime Fabric, your Mule license key must be Base64 encoded. Locate your organization’s Mule Enterprise license key file (license.lic) and perform the following according to your operating system.

Linux

To encode your license file on Linux, run the following:

base64 -w0 license.lic

MacOS

To encode your license file on MacOS, run the following from the terminal:

base64 -b0 license.lic

Windows

To encode your license file on Windows, you need a shell terminal emulator (such as Cygwin) or access to a Unix-based computer:

  1. Find your organization’s Mule Enterprise license key file (license.lic) and transfer to your Unix environment if necessary.

  2. Run the following command to Base64 encode the license key:

    base64 -w0 license.lic
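On any platform, you can capture the encoded value in a shell variable so you can paste it into fabric.tf or pass it to Terraform later. A minimal sketch (the MULE_LICENSE variable name is illustrative, not required by the installer):

    # Linux (GNU base64); on macOS use: base64 -b0 license.lic
    export MULE_LICENSE=$(base64 -w0 license.lic)

    # spot-check the first characters of the encoded value
    echo "$MULE_LICENSE" | head -c 40; echo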

Install Terraform

MuleSoft provides a Terraform script that you run to provision the required AWS resources. You can modify the provisioning script as needed for your installation: for example, you can enable disk encryption or encryption of EBS volumes. Runtime Fabric on VMs / Bare Metal is supported as long as you meet or exceed the minimum requirements specified in Runtime Fabric on VMs / Bare Metal Installation Prerequisites.

You can run the Terraform script in one of the following ways:

  • Download and manually install Terraform on your machine. Verify that the installed version of Terraform is 0.12.6 or later by running terraform --version in your terminal.

  • Use Terraform within Docker.
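For example, to confirm a compatible version after a manual installation (the exact output format may vary by release):

    terraform --version
    # Terraform v0.12.6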

Download the Terraform Script

  1. Sign in to Anypoint Platform and navigate to Runtime Manager.

  2. On the left navigation pane, select Runtime Fabrics.

  3. Click the name of your Runtime Fabric. It should be in the Activating state.

  4. Click Download files.

  5. When the download completes, unzip the rtf-install-scripts.zip file.
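For example, from a terminal (the download path is an assumption; adjust it to wherever your browser saved the archive):

    cd ~/Downloads
    unzip rtf-install-scripts.zip -d rtf-install-scripts
    ls rtf-install-scripts/aws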

Contents of the Runtime Fabric on VMs / Bare Metal Installer

Inside the rtf-install-scripts folder, there is an aws folder containing the following:

  • fabric.tf: Terraform script that provisions the infrastructure on your AWS account.

  • network_ingress.tf: Terraform script containing the Security Group rules used for network ingress.

  • network_egress.tf: Terraform script containing the Security Group rules used for network egress.

  • installer_env.sh: shell script of environment variables added to the controller VM acting as a leader during the installation.

  • controller_env.sh: shell script of environment variables added to the other controller VM(s).

  • worker_env.sh: shell script of environment variables added to the worker VM(s).

  • README.md: markdown file containing installation instructions.

Modify the fabric.tf Script

Before running Terraform, modify the fabric.tf script to reflect your specific environment. The tables below list the required and optional environment variables.

Depending on your environment, you may need to modify environment variables specified within fabric.tf that are not listed in the tables below. This is common if you need to use existing AWS resources such as an existing VPC.

In Runtime Fabric on VMs / Bare Metal, the inbound load balancer runs in either shared or dedicated mode:

  • Shared mode enables you to specify the number of CPU cores and the amount of memory allocated to the internal load balancer. In shared mode, the internal load balancer is distributed across the controller nodes. Shared mode is the default setting.

  • Dedicated mode dedicates all available resources to the internal load balancer, so you cannot choose the number of CPU cores or the amount of memory. In dedicated mode, the internal load balancer is deployed on dedicated internal load balancer nodes.

Refer to Manage Runtime Fabric for detailed information.

Required Environment Variables

  • activation_data: The encoded Runtime Fabric activation data. You can access this data by viewing your Runtime Fabric in Runtime Manager. Example: NzdlMzU1YTktMzAxMC00OGE0LWJlMGQtMDdxxxx

  • key_pair: The name of the key pair in the AWS region you are deploying to. Example: my-keypair

  • enable_public_ips: Specifies whether the installer creates public IP addresses for each VM. Public IPs enable you to SSH directly to the VMs from your network. If this value is set to false (the default), each VM has access only to the private network configured by its VPC. If you specify false, consult your network administrator on how to obtain shell/SSH access to the VMs. Example: true

  • controllers: The number of controller VMs to provision. Example: 3

  • workers: The number of worker VMs to provision. Example: 3

  • mule_license: The Base64-encoded contents of your organization’s Mule Enterprise license key (license.lic).
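These required variables map to -var flags on the terraform apply command used later in this topic. As a minimal sketch using the example values above (the activation data is a placeholder, and the MULE_LICENSE shell variable is an assumption carried over from the encoding step earlier):

    terraform apply \
      -var activation_data='NzdlMzU1YTktMzAxMC00OGE0LWJlMGQtMDdxxxx' \
      -var key_pair='my-keypair' \
      -var enable_public_ips='true' \
      -var controllers='3' \
      -var workers='3' \
      -var mule_license="$MULE_LICENSE" \
      -state=tf-data/rtf.tfstate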

Optional Environment Variables

  • controllers: The number of controller machines to provision. The default value is 3. Example: 5

  • disable_selinux: A Boolean variable that enables or disables SELinux on the host operating system. The default value is false. Example: true

  • enable_public_ips: A Boolean variable that indicates whether public IP addresses should be assigned. The default value is false. Example: true

  • existing_subnet_ids: A list of existing subnet IDs within AWS. Required when installing on an existing network. The installer uses these subnets instead of creating new ones.

  • existing_vpc_id: The ID of an existing AWS Virtual Private Cloud (VPC). Required when installing on an existing VPC. The installer uses this VPC instead of creating a new one.

  • http_proxy: The hostname and port of an HTTP proxy server used to forward outbound HTTP requests. Example: 1.1.1.1:80

  • installer_url: The URL of the Runtime Fabric installer package.

  • kubernetes_api_cidr_blocks: A list of CIDR ranges that are allowed to access the Kubernetes API. The default is an empty array: [].

  • monitoring_proxy: The hostname and port, and optional username and password, of a SOCKS5 proxy server used to forward metrics and logs to Anypoint Monitoring. Example: user:pass@1.1.1.2:800

  • mule_license: The Base64-encoded contents of your Mule license.

  • no_proxy: A comma-separated list of hosts that should bypass the proxy. Example: 1.1.1.1,no-proxy.com

  • ops_center_cidr_blocks: A list of CIDR ranges that are allowed to access the Ops Center UI. The default is an empty array: [].

  • pod_network_cidr_block: Provides support for a custom pod CIDR block. Example: 10.244.0.0/16

  • service_cidr_block: Provides support for a custom service CIDR block. Example: 10.100.0.0/16

  • service_uid: An integer representing the user ID used to run each Runtime Fabric service. Overrides the default behavior of creating a user named "planet". Example: 1000

  • service_gid: An integer representing the group ID used to run each Runtime Fabric service. Overrides the default behavior of creating a group named "planet". Example: 900

  • workers: The number of worker machines to provision. The default value is 3. Example: 5
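If you install into an existing network, for example, the optional VPC and subnet variables are added to the same command. A sketch under stated assumptions (the VPC and subnet IDs are hypothetical placeholders; the list syntax follows Terraform 0.12 conventions):

    terraform apply \
      -var activation_data='NzdlMzU1YTktMzAxMC00OGE0LWJlMGQtMDdxxxx' \
      -var key_pair='my-keypair' \
      -var enable_public_ips='false' \
      -var controllers='3' \
      -var workers='3' \
      -var mule_license="$MULE_LICENSE" \
      -var existing_vpc_id='vpc-0abc123def4567890' \
      -var 'existing_subnet_ids=["subnet-0aaa1111","subnet-0bbb2222"]' \
      -state=tf-data/rtf.tfstate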

Configure Your Environment to Access AWS

To run the Terraform script, configure AWS API access in your terminal. Define values for AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, along with AWS_SESSION_TOKEN, AWS_REGION, or AWS_DEFAULT_REGION as required. These variables are required to access the AWS API for your account.

To verify that these variables are set correctly, run the aws CLI, if it is installed on your machine.
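A minimal sketch of setting and verifying these variables (the values are placeholders; aws sts get-caller-identity is a standard AWS CLI call that confirms your credentials resolve to a valid identity):

    export AWS_ACCESS_KEY_ID='AKIA...'      # placeholder
    export AWS_SECRET_ACCESS_KEY='...'      # placeholder
    export AWS_SESSION_TOKEN='...'          # only needed for temporary credentials
    export AWS_DEFAULT_REGION='us-east-1'   # or set AWS_REGION
    aws sts get-caller-identity             # should print your account ID and ARN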

Run the Terraform Script

After modifying fabric.tf to reflect your AWS environment, run the Terraform script to install Runtime Fabric on VMs / Bare Metal.

The internal load balancer is distributed across the controller nodes for shared mode and is deployed on the internal load balancer nodes for dedicated mode.

Terraform State Files

When running, Terraform generates a state file to capture the details of the deployment. In the example below, the state file is located at tf-data/rtf.tfstate. Create and maintain a separate state file for each Runtime Fabric you provision.

Keep the state file in a safe place. It is required when making changes to this deployment, such as when scaling the number of worker or controller VMs.
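For example, you might copy the state file to a backup location after each run (the backup path is an assumption; any durable location works):

    cp tf-data/rtf.tfstate ~/backups/rtf.tfstate.$(date +%Y%m%d)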

Run Using Native Terraform

  1. Navigate to the ../rtf-install-scripts/aws/ directory. You must run Terraform from this directory.

  2. Initialize Terraform (this is only performed once):

    terraform init
  3. Copy the following script to a text editor:

    terraform apply \
      -var activation_data='' \
      -var key_pair='' \
      -var enable_public_ips='' \
      -var controllers='3' \
      -var workers='3' \
      -var mule_license='' \
      -state=tf-data/rtf.tfstate
  4. Modify this using the data in the environment variables tables above.

  5. Ensure your terminal has access to the AWS-specific environment variables required as described above.

    If you experience an error related to AWS authorization, ensure you’re using the same terminal window for verifying the variables and running the Terraform command.

  6. Run the script.

    The Terraform script provisions the infrastructure and runs the installation script on each VM. When the installation script completes, the Runtime Fabric is updated to Active status in Runtime Manager.

    This step installs Runtime Fabric on VMs / Bare Metal across all servers to form a cluster. It may take 15-25 minutes or longer to complete.

Run Terraform Within Docker

  1. Navigate to the ../rtf-install-scripts/ directory. You must run Terraform from this directory. When running the dir or ls command, you should see the aws directory listed along with other directories (azure, manual, and so on).

  2. Initialize Terraform (this only has to be performed once):

    docker run -v $(pwd):/src -w /src/aws \
      -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN \
      hashicorp/terraform:0.12.6 init
  3. Copy the following variable definitions to a text editor:

      -var activation_data='' \
      -var key_pair='' \
      -var enable_public_ips='' \
      -var controllers='3' \
      -var workers='3' \
      -var mule_license='' \
      -state=tf-data/rtf.tfstate
  4. Modify this information using the data in the environment variables tables previously provided.

  5. Run the Terraform script:

    docker run -it -v $(pwd):/src -w /src/aws \
      -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN \
      hashicorp/terraform:0.12.6 apply \
      -var activation_data='' \
      -var key_pair='' \
      -var enable_public_ips='' \
      -var controllers='3' \
      -var workers='3' \
      -var mule_license='' \
      -state=tf-data/rtf.tfstate
    The Terraform script provisions the infrastructure and runs the installation script on each VM. When the installation is complete, the Runtime Fabric is shown in Active status in Runtime Manager.

    This step installs Runtime Fabric on VMs / Bare Metal across all servers to form a cluster. It might take 15 to 25 minutes or longer to complete.

Monitoring Installation Progress

To view the progress during the installation, tail the output log on each VM:

  1. Open a shell (or SSH session) to the first controller VM.

  2. Tail the output log, located at /var/log/rtf-init.log, using the following command:

    tail -f /var/log/rtf-init.log
You can tail the same log on each VM to view its progress.

When the installation completes successfully, the installer creates the /opt/anypoint/runtimefabric/.state/init-complete file.
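For example, to confirm completion from a shell on a VM (the path is taken from above):

    # exits 0 and prints a message once the installer has finished on this VM
    test -f /opt/anypoint/runtimefabric/.state/init-complete && echo "init complete"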

Access Ops Center

After the installation completes successfully, log in to Ops Center to view the status of your Runtime Fabric infrastructure. See Using Ops Center on Anypoint Runtime Fabric for information on accessing Ops Center and determining the Ops Center username and password.

By default, the Terraform script configures the AWS Security Group so that the Ops Center port is not exposed to the internet. To manage your cluster through Ops Center using a public IP, the controller nodes must be provisioned with public IPs by setting the enable_public_ips variable, and you must adjust your Security Group to allow internet access (0.0.0.0/0) for TCP connections on port 32009.
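If you choose to open the port, one way to add the rule is with the AWS CLI (a sketch; the security group ID is a hypothetical placeholder, and your security team may prefer a narrower CIDR range than 0.0.0.0/0):

    aws ec2 authorize-security-group-ingress \
      --group-id sg-0123456789abcdef0 \
      --protocol tcp \
      --port 32009 \
      --cidr 0.0.0.0/0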