About Throttling and Rate Limiting Policies (Nov 2017 and Jul 2017)

The Rate Limiting policy limits the number of requests an API accepts within a window of time. The API rejects requests that exceed the limit. You can configure multiple limits with window sizes ranging from milliseconds to years.
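
Conceptually, rate limiting amounts to counting requests inside a fixed window and rejecting any request once the count reaches the quota. The following Python sketch illustrates that idea only; the class and names are illustrative assumptions, not the policy's actual implementation.

    import time

    class FixedWindowRateLimiter:
        """Illustrative fixed-window limiter: at most `quota` requests per `window_seconds`."""

        def __init__(self, quota: int, window_seconds: float):
            self.quota = quota
            self.window_seconds = window_seconds
            self.window_start = time.monotonic()
            self.count = 0

        def allow(self) -> bool:
            """Return True if the request fits in the current window, False if it must be rejected."""
            now = time.monotonic()
            if now - self.window_start >= self.window_seconds:
                # A new window has started; reset the counter.
                self.window_start = now
                self.count = 0
            if self.count < self.quota:
                self.count += 1
                return True
            return False  # quota exhausted for this window: reject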

The Throttling policy queues requests that exceed limits for possible processing in a subsequent window. The API eventually rejects a request if it still cannot be processed after a certain number of attempts. You can configure the delay between retries as well as the maximum number of retries.
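
Throttling, by contrast, does not reject an over-quota request immediately; it waits and retries in later windows, giving up only after the configured number of attempts. The sketch below illustrates that behavior using the limiter sketched earlier; the function name, parameters, and defaults are assumptions, not the policy's real settings.

    import time

    def throttle(limiter, handle_request, max_retries: int = 3, delay_seconds: float = 0.5):
        """Illustrative throttling: retry an over-quota request with a delay between attempts.

        `limiter` is any object with an allow() -> bool method, such as the sketch above.
        """
        for attempt in range(max_retries + 1):
            if limiter.allow():
                return handle_request()      # the request fits in the current window
            if attempt < max_retries:
                time.sleep(delay_seconds)    # wait for a later window before retrying
        raise RuntimeError("Request rejected: quota still exceeded after all retries")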

When applied to a RAML-backed API resource, the Rate Limiting and Throttling policies limit all requests to that resource. The service level agreement (SLA)-based Rate Limiting and Throttling policies add further granularity, limiting requests by the level of access granted to the requesting application. These policies include a persistence engine that preserves the current state of the policy in case of a sudden restart, such as a power outage.

How you configure the rate or throttling limit depends on whether the policy is SLA-based. If the policy is not SLA-based, you configure the limits when you apply the policy.

For example, you can set a single limit of 10 requests (the quota) per 2-second window.
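
As a rough illustration of what such a limit means, here is how the earlier sketch behaves with a quota of 10 and a 2-second window (again illustrative, not the actual policy):

    # Illustration of a 10-requests-per-2-seconds limit, using the FixedWindowRateLimiter sketch above.
    limiter = FixedWindowRateLimiter(quota=10, window_seconds=2.0)

    results = [limiter.allow() for _ in range(12)]
    print(results)
    # Within a single 2-second window, the first 10 requests pass and the rest are rejected:
    # [True, True, True, True, True, True, True, True, True, True, False, False]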

If the policy is SLA-based, Mule Runtime fetches all of the API's contracts; a contract defines the relationship between an SLA and a requesting application.
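
One way to picture SLA-based enforcement is a lookup from the requesting application to the limits granted by its contract. The structure below is purely illustrative (the client IDs, tier names, and helper are hypothetical) and does not reflect how Mule Runtime stores or fetches contracts.

    # Illustrative model: each contract ties an application (a hypothetical client_id)
    # to the quota and window of its SLA tier.
    contracts = {
        "app-client-id-1": {"tier": "Gold",   "quota": 100, "window_seconds": 1.0},
        "app-client-id-2": {"tier": "Silver", "quota": 10,  "window_seconds": 1.0},
    }

    # One limiter per application, sized by that application's contract
    # (FixedWindowRateLimiter is the sketch from earlier).
    limiters = {
        client_id: FixedWindowRateLimiter(contract["quota"], contract["window_seconds"])
        for client_id, contract in contracts.items()
    }

    def allow_request(client_id: str) -> bool:
        """Reject unknown applications; otherwise apply the limits of the application's SLA tier."""
        limiter = limiters.get(client_id)
        return limiter.allow() if limiter is not None else False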

When you run Mule Runtime in a cluster, you can configure the policy for distributed access control.

Rate Limiting and Throttling policies are both designed to limit API access, but they take different approaches: rate limiting protects an API by applying a hard limit on access, while throttling shapes API access by smoothing spikes in traffic.

Nov 2017: Under Mule Runtime 4.0, the Rate Limiting policy no longer propagates headers by default. You need to check the Expose Headers checkbox when you configure and apply the policy.
