
Installing Runtime Fabric on Red Hat OpenShift

You can install Anypoint Runtime Fabric on a Red Hat OpenShift installation.

Runtime Fabric supports the following Red Hat OpenShift deployment options:

  • Red Hat OpenShift Service on AWS

  • Microsoft Azure Red Hat OpenShift

  • Red Hat OpenShift Dedicated

  • Red Hat OpenShift on IBM Cloud

  • Self-managed Red Hat OpenShift editions (Performance Plus, OCP, Kubernetes engine)

Install Runtime Fabric on Red Hat OpenShift

When you install Runtime Fabric on Red Hat OpenShift, you complete the following tasks:

  1. Create a Runtime Fabric using Runtime Manager

  2. Create a namespace for Runtime Fabric

  3. Create a Docker pull secret for pulling the Runtime Fabric component images

  4. Optionally, configure authorized namespaces for shared tenancy

  5. Install and configure the Runtime Fabric operator

  6. Complete the remaining installation steps to validate your Runtime Fabric and configure ingress

Before You Begin

Before installing Runtime Fabric on Red Hat OpenShift, ensure that your cluster meets the Runtime Fabric prerequisites.

Create a Runtime Fabric using Runtime Manager

To install Runtime Fabric on Red Hat OpenShift, first create a Runtime Fabric using Runtime Manager. This step provides the activation data that is required during installation.

  1. From Anypoint Platform, select Runtime Manager.

  2. Click Runtime Fabrics.

  3. Click Create Runtime Fabric.

  4. Enter the name of the new Runtime Fabric, then select the Red Hat OpenShift option.

  5. Review the Support responsibility disclaimer, and if you agree, click Accept.

  6. Click Operator.

  7. Copy the activation data.

Create a Namespace for Runtime Fabric

You must create a namespace named rtf in your Kubernetes cluster. This namespace is where you install Runtime Fabric components.

To create the namespace, run:

oc create ns rtf
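
To confirm that the namespace exists before continuing, you can optionally run:

oc get namespace rtf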

Create a Docker Pull Secret

After you create the namespace, create a pull secret so you can retrieve the Docker images needed to install and run Runtime Fabric.

The default registry URL is rtf-runtime-registry.kprod.msap.io. If you’re using a local registry, specify your registry’s URL and credentials instead.

To create the pull secret, run:

oc create secret docker-registry <pull_secret> \
  --namespace rtf \
  --docker-server=<docker_registry_url> \
  --docker-username=<docker_registry_username> \
  --docker-password=<docker_registry_password>
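
For example, assuming you name the secret rtf-pull-secret (the default pullSecretName shown later in the installation parameters) and use the default registry URL, the command looks like the following; the username and password remain placeholders for your registry credentials:

oc create secret docker-registry rtf-pull-secret \
  --namespace rtf \
  --docker-server=rtf-runtime-registry.kprod.msap.io \
  --docker-username=<docker_registry_username> \
  --docker-password=<docker_registry_password>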

(Optional) Configure Authorized Namespaces

You can optionally configure authorized namespaces, which enable you to deploy Runtime Fabric alongside other services in a Kubernetes cluster.

Before You Begin

Before configuring authorized namespaces, note the following:

  • You must create the ConfigMap before installing Runtime Fabric, and you must name it authorized-namespaces.

  • The rtf:resource-metrics-collector ClusterRole has cluster-wide permissions to get and list nodes, pods, and namespaces, and to watch nodes. The ClusterRole is defined as follows:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: rtf:resource-metrics-collector
      labels:
        {{- include "labels.standard" . | nindent 4 }}
    rules:
      - apiGroups: [""]
        resources: ["nodes", "pods", "namespaces"]
        verbs: ["list", "get"]
      - apiGroups: [""]
        resources: ["nodes"]
        verbs: ["watch"]

To configure authorized namespaces:

    1. In your cluster, create an additional namespace for application deployments, and add the necessary labels to the namespace. To do so, create a YAML file with the following contents:

      apiVersion: v1
      kind: Namespace
      metadata:
        name: <namespace>
        labels:
          rtf.mulesoft.com/envId: <ENVIRONMENT-ID>
          rtf.mulesoft.com/org: <ORG-ID>
          rtf.mulesoft.com/role: workers
    2. Apply the file you just created:

      oc apply -f <filename>.yaml
    3. Repeat steps 1 and 2 to add as many namespaces as you need.

    4. Create the RoleBinding for the Runtime Fabric agent ClusterRole that includes the Runtime Fabric agent ServiceAccount. To do so, apply the following configuration in your additional namespace:

      kind: RoleBinding
      apiVersion: rbac.authorization.k8s.io/v1
      metadata:
        name: <name>
        namespace: <additional-namespace>
      subjects:
        - kind: ServiceAccount
          name: rtf-agent
          namespace: rtf
      roleRef:
        kind: ClusterRole
        name: rtf:agent
        apiGroup: rbac.authorization.k8s.io
    5. Apply the following RoleBinding template to the rtf namespace and to any additional namespaces:

      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: rtf
        namespace: rtf
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: system:openshift:scc:anyuid
      subjects:
        - kind: ServiceAccount
          name: rtf-agent
          namespace: rtf
        - kind: ServiceAccount
          name: mule-clusterip-service
          namespace: rtf
        - kind: ServiceAccount
          name: resource-cache
          namespace: rtf
        - kind: ServiceAccount
          name: rtf-persistence-gateway
          namespace: rtf
        - kind: ServiceAccount
          name: cluster-status
          namespace: rtf
        - kind: ServiceAccount
          name: am-log-forwarder
          namespace: rtf

      For each additional namespace (<app-namespace-name>), apply the same template, changing only the namespace in the metadata:

      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: rtf
        namespace: <app-namespace-name>
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: system:openshift:scc:anyuid
      subjects:
        - kind: ServiceAccount
          name: rtf-agent
          namespace: rtf
        - kind: ServiceAccount
          name: mule-clusterip-service
          namespace: rtf
        - kind: ServiceAccount
          name: resource-cache
          namespace: rtf
        - kind: ServiceAccount
          name: rtf-persistence-gateway
          namespace: rtf
        - kind: ServiceAccount
          name: cluster-status
          namespace: rtf
        - kind: ServiceAccount
          name: am-log-forwarder
          namespace: rtf
    6. In your cluster, create a ConfigMap named authorized-namespaces and list any additional namespaces in it. Because this uses a standard Kubernetes ConfigMap, each mapping key must be unique; there are no other requirements on the key names, provided they are unique.

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: authorized-namespaces
        namespace: rtf
      data:
        ADDITIONAL_NAMESPACE-1: "additional-namespace1"
        ADDITIONAL_NAMESPACE-2: "additional-namespace2"
    7. If, after fully installing Runtime Fabric, you later add or delete any namespaces from the ConfigMap, you must restart the Runtime Fabric agent pod. To do so, run the following command:

      oc -n rtf delete po -l app=agent

      After you delete the pod, Kubernetes starts a new one.
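
After completing these steps, you can optionally confirm the configuration from the command line. This is a quick sanity check, not a required step:

  oc get configmap authorized-namespaces -n rtf -o yaml
  oc get rolebinding -n <app-namespace-name>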

Install the Red Hat OpenShift Runtime Fabric Operator

You install the Runtime Fabric operator (rtf-agent-operator) from the OperatorHub.

  1. In the Red Hat OpenShift console, navigate to Operators > OperatorHub.

  2. In the OperatorHub search field, search for the Runtime Fabric operator.

  3. In the rtf-agent-operator dialog, click Install.

Installing the Runtime Fabric operator requires manual approval and may take several minutes to complete.

Configure the Runtime Fabric Operator

To configure the Runtime Fabric operator, you supply the necessary values when prompted.

  1. In the Red Hat OpenShift console, navigate to Operators > Installed Operators.

  2. In the console, switch the value of Project to the namespace you created for installing Runtime Fabric.

  3. In the console, click Create Instance, and select Configure via form view.

    Do not change the name of the instance. Doing so can cause installation errors.

  4. Add any required parameters. Refer to the Installation Parameters Reference for guidance.

    If you’re using authorized namespaces, set authorizedNamespaces to true.

  5. Click Create.

Installation Parameters Reference

The following is an example YAML view of the installation parameters.

activationData: <activation data>
proxy:
  http_proxy:
  http_no_proxy:
  monitoring_proxy:
custom_log4j_enabled: true
muleLicense: <mule license key>
global:
  authorizedNamespaces: false
  image:
    rtfRegistry: <rtf-runtime-registry.kprod.msap.io or local registry value>
    pullSecretName: rtf-pull-secret
  containerLogPaths:
  - /var/lib/docker/containers
  - /var/log/containers
  - /var/log/pods

Required Parameters

The values for these required parameters are set when you create the Runtime Fabric in Runtime Manager. If you’re not using a local registry, use the default values for the registry URL and pull secret.

Key              Value                           Example

activationData   Activation data                 YW55cG9pbnQubXVsZXNvZnQuY29tOjBmODdmYzYzLTM3MWUtNDU2Yy1iODg5LTU5NTkyNjYyZjUxZQ==

rtfRegistry      Registry URL                    rtf-runtime-registry.kprod.msap.io

pullSecretName   Registry secret                 <pull_secret>

muleLicense      Mule license for applications   <mule_license_key> (must be Base64 encoded)

Optional Parameters

Set these optional parameters as needed.

Key                                     Value                                              Example

customLog4jEnabled                      Enables or disables custom Log4j configurations    customLog4jEnabled: true (default is false)

authorizedNamespaces                    Enables or disables authorized namespaces          authorizedNamespaces: true (default is false)

proxy.http_proxy, proxy.http_no_proxy   Proxy and no_proxy values

proxy.monitoring_proxy                  Anypoint Monitoring proxy values                   socks5://<user>:<pass>@<10.0.0.2>:<8080>

global.containerLogPaths                The Filebeat read paths                            /var/lib/docker/containers, /var/log/containers, /var/log/pods
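
If your cluster reaches the internet through a proxy, the proxy block of the installation parameters might look like the following sketch. The hosts, ports, and no_proxy entries are illustrative placeholders, not recommended values:

  proxy:
    http_proxy: http://<user>:<password>@<proxy-host>:<proxy-port>
    http_no_proxy: "<internal-domain>,<cluster-cidr>"
    monitoring_proxy: socks5://<user>:<password>@<proxy-host>:<proxy-port>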

Insert the Mule License Key

If you didn’t add the Mule license key during install, you can add it using the rtfctl command line utility or Helm.

Before you install the license key, encode it to Base64 format.

Encode the License Key

  • On macOS, run the following command:

    BASE64_ENCODED_LICENSE=$(base64 -b0 license.lic)
  • On Linux, run the following command:

    BASE64_ENCODED_LICENSE=$(base64 -w0 license.lic)
  • On Windows, choose one of the following:

    • Use a WSL or Cygwin shell that includes the base64 tool and run the Linux command above.

    • Use the base64.exe program included with Windows git (C:\Program Files\Git\usr\bin).

    • Use the following PowerShell command:

      $BASE64_ENCODED_LICENSE=[convert]::ToBase64String((Get-Content -path "license.lic" -Encoding byte))
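
On Linux, you can optionally confirm that the encoded value decodes back to your license by inspecting the first characters. This is a convenience check, not part of the documented procedure:

  echo "$BASE64_ENCODED_LICENSE" | base64 -d | head -c 40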

Apply the License Key Using rtfctl

  1. From a machine with rtfctl installed and configured to access your OpenShift cluster, run the following command:

    rtfctl apply mule-license $BASE64_ENCODED_LICENSE

    You can also apply the Mule license by providing the file path directly:

    rtfctl apply mule-license --file /path/to/license.lic
  2. To verify that the Mule license key was applied correctly, run:

    rtfctl get mule-license

Apply the License Key Using Helm

To apply the license using Helm, run the following command, substituting the Helm release name and chart reference you used when installing Runtime Fabric:

helm upgrade <release-name> <chart> --reuse-values --set muleLicense=$BASE64_ENCODED_LICENSE

Configure the Ingress Resource Template

If your ingress controller requires custom annotations and ingress class definition, follow the instructions in Defining a Custom Ingress Configuration.
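
As a generic illustration only, not the Runtime Fabric template format (that format is described in Defining a Custom Ingress Configuration), an ingress class and custom annotation on a standard Kubernetes Ingress resource look like this; the annotation, class name, host, and service name are placeholders:

  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    name: example-ingress
    annotations:
      <annotation-key>: "<annotation-value>"
  spec:
    ingressClassName: <ingress-class-name>
    rules:
      - host: <app-host>
        http:
          paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: <service-name>
                  port:
                    number: 80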

If you use GKE, the ingress controller included with GKE provisions a separate HTTP load balancer per application by default. See the related KB article for more details.

Validate Your Runtime Fabric

After completing the installation, your Runtime Fabric should be activated within your Anypoint organization. To validate your installation, go to Anypoint Runtime Manager and confirm that the status of the Runtime Fabric is Active.
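
In addition to checking the status in Runtime Manager, you can verify that the Runtime Fabric component pods are running in your cluster. This is a quick local check rather than a documented validation step:

  oc get pods -n rtf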

Before deploying an application to your Runtime Fabric:

  1. Associate the Runtime Fabric with at least one Anypoint environment.

  2. Review and update the Inbound Traffic settings based upon your Kubernetes environment.

  3. Deploy an application to verify that Runtime Fabric is installed and configured correctly.

Prepare for Deploying Mule Apps to Red Hat OpenShift

Before you deploy any Mule applications to your Red Hat OpenShift cluster, ensure that you’ve installed your Mule license key, and then perform the following steps.

  1. Create a namespace for your Mule app deployments, as shown in the example after these steps. See Creating Application Namespaces for Application Deployments.

  2. From Runtime Manager, deploy a Mule app using your namespace. See Deploy Mule Applications to Runtime Fabric for instructions.
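
If you configured authorized namespaces, the application namespace must carry the Runtime Fabric labels shown earlier on this page. For example, assuming a hypothetical namespace named mule-apps:

  apiVersion: v1
  kind: Namespace
  metadata:
    name: mule-apps
    labels:
      rtf.mulesoft.com/envId: <ENVIRONMENT-ID>
      rtf.mulesoft.com/org: <ORG-ID>
      rtf.mulesoft.com/role: workers

Apply the file with oc apply -f <filename>.yaml.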