Introduction to Mule 4: Execution Engine Threads and Concurrency

Mule 4 has an improved execution engine that simplifies the development and scaling of Mule apps. The underlying engine is based on a reactive, non-blocking architecture. This task-oriented execution model allows you to take advantage of non-blocking IO calls and avoid performance problems due to incorrect processing strategy configurations.

From a development point of view, there are a few major changes:

  • Exchange patterns no longer exist. All connectors receive responses. If you want to process a message asynchronously, use the Async component (see the sketch after this list).

  • Every flow always uses a non-blocking processing strategy, and there is no need to configure processing strategies anymore.

  • There is a single, global thread pool for all flows.
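
For example, here is a minimal sketch of handing work off to the Async scope. The flow name, listener config-ref, and path below are placeholders for illustration, not taken from the original post.

<flow name="asyncDemoFlow">
  <http:listener config-ref="HTTP_Listener_config" path="/orders"/>
  <async>
    <!-- Fire-and-forget work: the flow does not wait for this to complete -->
    <logger level="INFO" message="Writing audit record"/>
  </async>
  <!-- The listener response is produced without blocking on the async block -->
  <logger level="INFO" message="Request accepted"/>
</flow>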

Controlling Concurrency

Mule 4 decouples thread configuration from concurrency management. To control concurrency on a flow, or on any other component that supports it, use the maxConcurrency attribute to set the number of simultaneous invocations that component can receive at any given time.

<!-- Flow name, listener config-ref, and path are placeholders -->
<flow name="concurrencyDemoFlow" maxConcurrency="1">
  <http:listener config-ref="HTTP_Listener_config" path="/invoices"/>
  <scatter-gather maxConcurrency="3">
    <route>
      <!-- route processors -->
    </route>
    <route>
      <!-- route processors -->
    </route>
  </scatter-gather>
</flow>
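
With this configuration, the flow accepts only one event at a time, and within each event the scatter-gather executes at most three of its routes in parallel.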

Thread Pools and Tuning Apps

As mentioned above, there is now a single thread pool for all flows in an app, which decouples overall tuning from flow-specific behavior. Mule allocates threads in proportion to the number of CPU cores available, and you can tune these thread counts in the self-documented conf/scheduler-pools.conf file.

Each Mule event processor can now inform the runtime whether it is a CPU-intensive, CPU-light, or IO-intensive operation. This helps the runtime self-tune for different workloads dynamically, removing the need for you to manage thread pools manually. As a result, Mule 4 eliminates the complex tuning previously required to achieve optimal performance.
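
For reference, thread counts for those workload categories are configured in conf/scheduler-pools.conf as standard properties, roughly along the lines of the sketch below. The key names and values shown here are assumptions for illustration only; the shipped file documents the exact properties and their default expressions.

# Illustrative sketch only -- key names and values are assumptions;
# consult the self-documented conf/scheduler-pools.conf in your Mule runtime
org.mule.runtime.scheduler.cpuLight.threadPool.size=2*cores
org.mule.runtime.scheduler.io.threadPool.coreSize=cores
org.mule.runtime.scheduler.cpuIntensive.threadPool.size=2*cores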