Installing Multiple Instances of Runtime Fabric on a Single OpenShift Cluster
Installing multiple instances of Anypoint Runtime Fabric lets several Runtime Fabrics share the same cluster, which helps you use cluster resources more efficiently.
Install Multiple Instances of Runtime Fabric
To install multiple instances of Anypoint Runtime Fabric on a single OpenShift cluster, follow these steps:

1. Create a Runtime Fabric using Runtime Manager for OpenShift.

2. Create a custom namespace to install the Runtime Fabric:

   kubectl create ns <rtf_namespace>

3. Create a Docker pull secret to pull the Runtime Fabric component images for the previously created namespace:

   kubectl create secret docker-registry rtf-pull-secret --namespace <rtf_namespace> --docker-server=<docker_registry_url> --docker-username=<docker_registry_username> --docker-password=<docker_registry_password>
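Before continuing, you can optionally confirm that the namespace and pull secret exist. This is a quick spot check with standard kubectl commands, not part of the official procedure:

# Verify the namespace and the pull secret created above
kubectl get ns <rtf_namespace>
kubectl get secret rtf-pull-secret -n <rtf_namespace>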
(Optional) Configure Authorized Namespaces
You can optionally configure authorized namespaces, which enable you to deploy Runtime Fabric alongside other services in a Kubernetes cluster.
Before You Begin
Before configuring authorized namespaces, note the following:
- You must create the authorized-namespaces ConfigMap before installing Runtime Fabric, and the ConfigMap must be named authorized-namespaces.

- The rtf:resource-metrics-collector ClusterRole has cluster-wide permissions to get and list nodes, pods, and namespaces, and has watch permissions for nodes. The ClusterRole is defined as follows:

  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: rtf:resource-metrics-collector
    labels:
      {{- include "labels.standard" . | nindent 4 }}
  rules:
    - apiGroups: [""]
      resources: ["nodes", "pods", "namespaces"]
      verbs: ["list", "get"]
    - apiGroups: [""]
      resources: ["nodes"]
      verbs: ["watch"]
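If you want to confirm these permissions in your cluster, you can inspect the ClusterRole directly once the Runtime Fabric components are installed. This is an optional check using a standard oc command:

# Show the rules granted by the metrics collector ClusterRole
oc describe clusterrole rtf:resource-metrics-collector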
1. In your cluster, create an additional namespace for application deployments, and add the necessary labels to the namespace. To do so, create a YAML file with the following contents:

   apiVersion: v1
   kind: Namespace
   metadata:
     name: <namespace>
     labels:
       rtf.mulesoft.com/agentNamespace: <rtf_namespace>
       rtf.mulesoft.com/envId: <environment_id>
       rtf.mulesoft.com/org: <org_id>
       rtf.mulesoft.com/role: workers

2. Apply the file you just created:

   oc apply -f <filename>.yaml

3. Repeat steps 1 and 2 to add as many namespaces as you need.
4. Create the RoleBinding for the Runtime Fabric agent ClusterRole that includes the Runtime Fabric agent ServiceAccount. To do so, apply the following configuration in your additional namespace:

   kind: RoleBinding
   apiVersion: rbac.authorization.k8s.io/v1
   metadata:
     name: <name>
     namespace: <app_namespace>
   subjects:
     - kind: ServiceAccount
       name: rtf-agent
       namespace: <rtf_namespace>
   roleRef:
     kind: ClusterRole
     name: rtf:agent
     apiGroup: rbac.authorization.k8s.io

5. Apply the following RoleBinding template to the rtf namespace and any additional namespaces:

   apiVersion: rbac.authorization.k8s.io/v1
   kind: RoleBinding
   metadata:
     name: rtf
     namespace: <rtf_namespace>
   roleRef:
     apiGroup: rbac.authorization.k8s.io
     kind: ClusterRole
     name: system:openshift:scc:anyuid
   subjects:
     - kind: ServiceAccount
       name: rtf-agent
       namespace: <rtf_namespace>
     - kind: ServiceAccount
       name: mule-clusterip-service
       namespace: <rtf_namespace>
     - kind: ServiceAccount
       name: resource-cache
       namespace: <rtf_namespace>
     - kind: ServiceAccount
       name: rtf-persistence-gateway
       namespace: <rtf_namespace>
     - kind: ServiceAccount
       name: cluster-status
       namespace: <rtf_namespace>
     - kind: ServiceAccount
       name: am-log-forwarder
       namespace: <rtf_namespace>

   For <additional-namespace>, use the same template, changing only metadata.namespace to the application namespace:

   apiVersion: rbac.authorization.k8s.io/v1
   kind: RoleBinding
   metadata:
     name: rtf
     namespace: <app_namespace>
   roleRef:
     apiGroup: rbac.authorization.k8s.io
     kind: ClusterRole
     name: system:openshift:scc:anyuid
   subjects:
     - kind: ServiceAccount
       name: rtf-agent
       namespace: <rtf_namespace>
     - kind: ServiceAccount
       name: mule-clusterip-service
       namespace: <rtf_namespace>
     - kind: ServiceAccount
       name: resource-cache
       namespace: <rtf_namespace>
     - kind: ServiceAccount
       name: rtf-persistence-gateway
       namespace: <rtf_namespace>
     - kind: ServiceAccount
       name: cluster-status
       namespace: <rtf_namespace>
     - kind: ServiceAccount
       name: am-log-forwarder
       namespace: <rtf_namespace>
6. In your cluster, create a ConfigMap named authorized-namespaces and list the additional namespaces. The mapping keys must be unique because the list uses a standard Kubernetes ConfigMap resource; there are no specific requirements on the format of the key names, provided they are unique.

   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: authorized-namespaces
     namespace: <rtf_namespace>
   data:
     APPLICATION_NAMESPACE_1: "<app_namespace_1>"
     APPLICATION_NAMESPACE_2: "<app_namespace_2>"
7. If, after fully installing Runtime Fabric, you later add or delete any namespaces in the ConfigMap, you must restart the Runtime Fabric agent pod. To do so, run the following command:

   oc -n <rtf_namespace> delete po -l app=agent

   After you delete the pod, Kubernetes starts a new one.
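After completing this procedure, you can spot-check that each piece is in place before deploying applications to the additional namespaces. These are optional checks with standard oc commands; substitute your own namespace names:

# Confirm the application namespace carries the Runtime Fabric labels
oc get namespace <app_namespace> --show-labels

# Confirm the authorized-namespaces ConfigMap lists the namespace
oc get configmap authorized-namespaces -n <rtf_namespace> -o yaml

# Confirm the RoleBindings exist in the application namespace
oc get rolebindings -n <app_namespace>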
Install the Red Hat OpenShift Runtime Fabric Operator

You install the Runtime Fabric operator (rtf-agent-operator) from the OperatorHub.

1. In the Red Hat OpenShift console, navigate to Operators > OperatorHub.

2. In the OperatorHub search field, search for the Runtime Fabric operator.

3. In the rtf-agent-operator dialog, click Install.

Installing the Runtime Fabric operator requires manual approval and may take several minutes to complete.
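If you prefer to follow the installation from the command line, you can check the operator's ClusterServiceVersion with standard OLM tooling. This is an optional check; <operator_namespace> is whichever namespace you selected during installation, and the exact CSV name varies by operator version:

# List ClusterServiceVersions and their install status
oc get csv -n <operator_namespace>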
Values.yaml File Required Parameters
The values for these required parameters are set when you create the Runtime Fabric instance in Runtime Manager. If you’re not using a local registry, use the default values for the registry URL and pull secret.
Key | Value | Example |
---|---|---|
activationData | Activation Data | YW55cG9pbnQubXVsZXNvZnQuY29tOjBmODdmYzYzLTM3MWUtNDU2Yy1iODg5LTU5NTkyNjYyZjUxZQ== |
rtfRegistry | Registry URL | US: rtf-runtime-registry.kprod.msap.io, EU: rtf-runtime-registry.kprod-eu.msap.io |
pullSecretName | Registry pull secret | rtf-pull-secret |
muleLicense | Mule license for applications | Mule license key (must be Base64-encoded) |
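The Mule license key must be supplied as a single-line Base64 string. A minimal sketch of encoding an existing license file, assuming it is named license.lic (a placeholder); the flag for disabling line wrapping differs between platforms:

# Linux (GNU coreutils): -w0 disables line wrapping
base64 -w0 license.lic

# macOS (BSD): output is not wrapped by default
base64 -i license.lic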
Values.yaml Optional Parameters
Set the following optional parameters as needed:
Key | Description | Example |
---|---|---|
authorizedNamespaces | Enables shared tenancy | authorizedNamespaces=true |
crds.install | Enables installation of CRDs and PriorityClass | install=true |
http_proxy, http_no_proxy | Proxy and no_proxy values | http://<user>:<pass>@<10.0.0.1>:<8080> |
monitoring_proxy | Monitoring proxy values | socks5://<user>:<pass>@<10.0.0.2>:<8080> |
containerLogPaths | Filebeat read path | - /var/lib/docker/ |

For the first agent installed on the cluster, set crds.install to true. Set crds.install to false for all subsequent agent installations on the same cluster.
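If your cluster reaches Anypoint through a proxy, the values in this table map onto the proxy block of values.yaml. A minimal sketch using the example values above; the credentials, hosts, and the comma-separated no_proxy list are placeholders, and the nesting follows the reference in the next section:

proxy:
  http_proxy: http://<user>:<pass>@10.0.0.1:8080
  http_no_proxy: localhost,127.0.0.1
  monitoring_proxy: socks5://<user>:<pass>@10.0.0.2:8080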
Values.yaml Reference
The following is an example of the values.yaml file:

activationData: <activation_data>
proxy:
  http_proxy:
  http_no_proxy:
  monitoring_proxy:
custom_log4j_enabled: true
muleLicense: <mule_license_key>
global:
  crds:
    install: true
  authorizedNamespaces: false
  image:
    rtfRegistry: rtf-runtime-registry.kprod.msap.io
    pullSecretName: rtf-pull-secret
  containerLogPaths:
    - /var/lib/docker/containers
    - /var/log/containers
    - /var/log/pods
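For a second or subsequent instance sharing the same cluster, the file typically differs only in the activation data and the two flags discussed above. A sketch under those assumptions; the placeholder values are illustrative and the nesting follows the reference above:

activationData: <activation_data_for_second_instance>
muleLicense: <mule_license_key>
global:
  crds:
    install: false            # CRDs and PriorityClass were installed with the first agent
  authorizedNamespaces: true  # set to true only if you use authorized namespaces
  image:
    rtfRegistry: rtf-runtime-registry.kprod.msap.io
    pullSecretName: rtf-pull-secret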
Installing Subsequent Instances of Runtime Fabric
To install each subsequent instance of Runtime Fabric for OpenShift, repeat the following steps:

1. Create a Runtime Fabric using Runtime Manager for OpenShift.

2. Create a custom namespace to install the Runtime Fabric:

   kubectl create ns <rtf_namespace>

3. Create a Docker pull secret to pull the Runtime Fabric component images for the previously created namespace:

   kubectl create secret docker-registry rtf-pull-secret --namespace <rtf_namespace> --docker-server=<docker_registry_url> --docker-username=<docker_registry_username> --docker-password=<docker_registry_password>
4. In the Red Hat OpenShift console, navigate to Operators > Installed Operators.

5. In the console, switch the value of Project to the namespace you created for installing Runtime Fabric.

6. In the console, click Create Instance, and select Configure via form view.

7. Add any values.yaml required parameters.

8. If you're using authorized namespaces, set authorizedNamespaces to true.

9. Set the value for crds.install to false.

10. Click Create.
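After you click Create, the operator deploys the Runtime Fabric agent into the namespace you selected. As an optional check from the command line, confirm that the agent pod is running (app=agent is the same label used earlier in this guide), then check the Runtime Fabric's status in Runtime Manager:

# The agent pod for this instance should reach the Running state
oc -n <rtf_namespace> get pods -l app=agent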