

Anypoint Runtime Fabric on VMs / Bare Metal Architecture

Anypoint Runtime Fabric on VMs / Bare Metal consists of a set of VMs that form a cluster. Each VM serves as either a controller node or a worker node.

[Diagram: Runtime Fabric cluster architecture with controller and worker nodes]
  • Controller: a VM dedicated to operating Runtime Fabric, including orchestration services, a distributed database, load balancing, and the services that enable you to manage the cluster from Anypoint Platform.

  • Worker: a VM dedicated to running Mule applications and API gateways. Mule applications and API proxies run on the worker nodes.

This separation of responsibilities enables you to scale the worker nodes based on the number of Mule applications, and to scale the controller nodes based on the frequency of deployments, changes in application state, and the amount of inbound traffic. To ensure that resources are available to reschedule and redeploy applications in the event of a hardware failure, MuleSoft recommends over-provisioning the number of worker nodes.
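
As a rough illustration of that over-provisioning guidance, the sketch below estimates how many worker nodes are needed to host a set of application replicas, plus one spare node so replicas can be rescheduled if a worker fails. The core counts and replica sizes are hypothetical examples, not MuleSoft sizing recommendations.

    import math

    # Hypothetical example values: vCPU cores reserved by each Mule
    # application replica you plan to deploy, and usable cores per worker VM.
    reserved_cores_per_replica = [0.5, 0.5, 1.0, 1.0, 2.0]
    cores_per_worker = 2

    total_reserved = sum(reserved_cores_per_replica)

    # Workers needed to hold the replicas, plus one spare so replicas can be
    # rescheduled onto the remaining workers if a single worker VM fails.
    workers_for_capacity = math.ceil(total_reserved / cores_per_worker)
    workers_with_spare = workers_for_capacity + 1

    print(f"Total reserved cores: {total_reserved}")
    print(f"Workers needed for capacity: {workers_for_capacity}")
    print(f"Workers including one spare for failover: {workers_with_spare}")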

By default, the services operating Runtime Fabric are deployed across the controller nodes to avoid a single point of failure in the system.

Anypoint Runtime Fabric on VMs / Bare Metal uses a set of technologies, including Docker and Kubernetes, which are tuned to operate well with Mule runtimes. Knowledge of these technologies is not required to deploy or manage Mule applications on Runtime Fabric. However, managing Runtime Fabric requires the operational and infrastructure-level experience needed to support any system at scale. MuleSoft recommends following best practices and running fire-drill scenarios in controlled environments to help prepare for unexpected failures.

Deploying applications and gateways that are not powered by Mule is not supported on Anypoint Runtime Fabric.

Development and Production Configurations

Anypoint Runtime Fabric on VMs / Bare Metal supports development and production configurations. These configurations specify the minimum number of nodes and the resources required.

Development Configuration

The development configuration is intended for testing only. It requires at least one controller node and two worker nodes. The controller node runs the internal load balancer and the agents used to connect to Anypoint Platform. Communication between the agent and Anypoint Platform is always outbound. Multiple replicas of an application can run across the worker nodes.

[Diagram: development configuration architecture]

Production Configuration

[Diagram: production configuration architecture]

Only controllers run the internal load balancer and agents used to connect to Anypoint Platform.

Agents can run on any controller. Agent communication is always outbound.

The minimum requirement is three controller nodes and three worker nodes. Three controllers tolerate the loss of one controller. To tolerate the loss of two controllers, configure a total of five controllers.
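
This sizing follows the majority-quorum rule that is typical for clustered control planes such as the distributed database running on the controllers: a cluster of n controllers remains available while a majority of them are healthy, so it tolerates floor((n - 1) / 2) failures. A minimal sketch of that arithmetic:

    def tolerated_controller_failures(controllers: int) -> int:
        """Failures tolerated by a majority-quorum cluster of the given size."""
        return (controllers - 1) // 2

    for n in (1, 3, 5, 7):
        print(f"{n} controllers -> tolerates {tolerated_controller_failures(n)} failure(s)")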

Mule applications run on workers. Multiple replicas of applications can run across workers.

Network Architecture

The following diagram shows the general network architecture of Runtime Fabric on VMs / Bare Metal.

[Diagram: Runtime Fabric network architecture]

This diagram shows the external TCP load balancer that distributes requests across the internal load balancers running on the controller nodes. It also shows the internal load balancer distributing requests to each replica of the Mule applications running on the workers.

When you enable an internal load balancer, the following modes are supported:

  • Shared Mode

    The default setting if no dedicated internal load balancer node is added to Runtime Fabric on VMs / Bare Metal. In this mode, the internal load balancer runs on all controller nodes with the amount of CPU and memory that you specify.

  • Dedicated Mode

    Enabled only if one or more dedicated internal load balancer nodes are available in Runtime Fabric on VMs / Bare Metal. Because dedicated internal load balancer nodes use all available resources to run the internal load balancer, you cannot specify the number of CPU cores or the amount of memory.

Higher inbound traffic loads require dedicated nodes. To determine if dedicated nodes are needed for your environment, refer to the performance information in Internal Load Balancer.

[Diagram: request flow through the external and internal load balancers]
  1. The incoming HTTP request is forwarded to the external TCP load balancer.

  2. The TCP load balancer forwards the request to an available internal load balancer on Runtime Fabric.

  3. The internal load balancer decrypts the request and directs it to an available replica of the Mule application (app2 in the diagram above).

  4. The application sends a response which is routed back to the requester.
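
For example, a client calling a Mule application deployed on Runtime Fabric only needs to target the external TCP load balancer. The hostname, path, and CA bundle in this sketch are placeholder values for illustration, not names defined by Runtime Fabric.

    import requests  # third-party HTTP client, assumed to be available

    # Placeholder values: the DNS name of the external TCP load balancer,
    # the path that the Mule application (app2) listens on, and the CA
    # bundle that signed the certificate served by the internal load balancer.
    EXTERNAL_LB = "https://rtf.example.com"
    APP_PATH = "/app2/api/status"
    CA_BUNDLE = "internal-lb-ca.pem"

    # TLS is terminated by the internal load balancer (step 3 above); the
    # external TCP load balancer only forwards the encrypted byte stream.
    response = requests.get(EXTERNAL_LB + APP_PATH, verify=CA_BUNDLE, timeout=10)
    print(response.status_code, response.text[:200])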

Runtime Fabric and Other PaaS Providers

Anypoint Runtime Fabric on VMs / Bare Metal contains all of the components it requires. These components, including Docker and Kubernetes, are optimized to work efficiently with Mule runtimes and other MuleSoft services.

Installing Anypoint Runtime Fabric on VMs / Bare Metal within an existing Kubernetes-based PaaS is not supported. In this situation, consider using Runtime Fabric on Self-Managed Kubernetes.

If you are already using a PaaS solution, MuleSoft recommends deploying Runtime Fabric on VMs / Bare Metal in parallel with your PaaS. This enables you to take advantage of the complete benefits of Anypoint Platform.

Shared Responsibility for Anypoint Runtime Fabric on VMs / Bare Metal

The successful operation of Anypoint Runtime Fabric on VMs / Bare Metal is a shared responsibility. It is critical to understand which areas you must manage and which areas are managed by MuleSoft. Runtime Fabric operates successfully only if it is provided the required infrastructure and maintenance.

The following image illustrates MuleSoft and customer responsibilities for on-premises Runtime Fabric instances:

[Diagram: Runtime Fabric shared responsibilities]
  • MuleSoft responsibility:

    MuleSoft manages the Runtime Fabric appliance and is responsible for the delivered components: the Runtime Fabric appliance, the Runtime Fabric agent, the Mule runtime engine, and other dependencies required to run Mule applications.

  • Customer responsibility:

    Customers are responsible for provisioning, configuring, and managing the infrastructure required for Runtime Fabric on VMs / Bare Metal.

    • Infrastructure includes the following (see the verification sketch after this list):

      • VM resources (CPU, Memory)

      • Disk performance and capacity

      • Operating systems and kernel patching

      • Network ports

      • Synchronization of system time across all VMs

    • For infrastructure provisioning and management, you will need the assistance of the following teams in your organization:

      • DevOps team to provision and manage the infrastructure

      • Network team to specify allowed ports and configure proxy settings

      • Security team to verify compliance and obtain security certificates
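
The following sketch spot-checks two of the infrastructure items listed above (VM CPU and memory) on a Linux host. The minimum values are placeholders for illustration only; use the minimums documented in the Runtime Fabric installation prerequisites for your configuration.

    import os

    # Placeholder minimums for illustration only; consult the Runtime Fabric
    # installation prerequisites for the actual values for your configuration.
    MIN_CORES = 8
    MIN_MEMORY_GIB = 16

    cores = os.cpu_count() or 0

    # /proc/meminfo is Linux-specific; MemTotal is reported in kibibytes.
    with open("/proc/meminfo") as meminfo:
        mem_kib = next(int(line.split()[1]) for line in meminfo
                       if line.startswith("MemTotal:"))
    mem_gib = mem_kib / (1024 * 1024)

    print(f"CPU cores: {cores} (minimum {MIN_CORES}): {'OK' if cores >= MIN_CORES else 'LOW'}")
    print(f"Memory: {mem_gib:.1f} GiB (minimum {MIN_MEMORY_GIB}): "
          f"{'OK' if mem_gib >= MIN_MEMORY_GIB else 'LOW'}")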

Connecting Runtime Fabric to Anypoint Management Center

Anypoint Runtime Fabric supports the following:

  • Deploying applications from Anypoint Runtime Manager.

  • Deploying policy updates to API gateways using API Manager.

  • Storing and retrieving assets with Anypoint Exchange.

To enable integration with the Anypoint Management Center, Runtime Fabric requires outbound access to Anypoint Platform on port 443. This connection is secured using mutual TLS. A set of services running on the controller VMs initiates outbound connections to retrieve the metadata and assets required to deploy an application. These services then translate and communicate with other internal services to cache the assets locally and deploy the application.
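
A quick way to confirm that this outbound path is open from a controller VM is to attempt a TCP connection to your Anypoint Platform host on port 443. The sketch below uses the US control plane hostname as an example; substitute the host for your control plane. It verifies only TCP reachability, not the mutual TLS handshake itself.

    import socket

    # US control plane host shown as an example; use the host for your
    # control plane (the EU control plane uses a different hostname).
    HOST = "anypoint.mulesoft.com"
    PORT = 443

    try:
        # Verifies only that an outbound TCP connection can be established;
        # the mutual TLS handshake is performed by the Runtime Fabric services.
        with socket.create_connection((HOST, PORT), timeout=5):
            print(f"Outbound connection to {HOST}:{PORT} succeeded")
    except OSError as err:
        print(f"Outbound connection to {HOST}:{PORT} failed: {err}")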

Check with your network administrator about enabling required outbound connections from your organization’s network.