Using Persistence Gateway with Runtime Fabric
Anypoint Runtime Fabric provides Persistence Gateway, which enables Mule applications deployed to a Mule runtime instance to store and share data across application replicas and restarts.
You can also use the Mule Maven plugin version 3.5.4 or later to enable persistent object stores for Runtime Fabric deployments.
Using Persistence Gateway in a Mule Application
After Persistence Gateway is configured in Anypoint Runtime Fabric, it is available for Mule applications deployed to Mule runtime engine, version 4.2.1 or later. When configured, users can select Use Persistent Object Storage when deploying an application using Runtime Manager. See Deploy a Mule Application to Runtime Fabric for more information.
Mule applications use the Object Store v2 REST API via the Object Store Connector to connect to Persistence Gateway. This enables you to deploy to both Anypoint Runtime Fabric and CloudHub without having to modify your Mule application.
After you delete an application, its persistent data might not be deleted immediately. Runtime Fabric cleans persistence data every 60 minutes.
Persistence Gateway Limits
The following table lists the limits on the data stored by Persistence Gateway:

Limit | Description |
---|---|
Maximum TTL | The maximum amount of time data is stored in Persistence Gateway. This value is set to 30 days. |
Before You Begin
Before enabling Persistence Gateway, ensure that you have:
- Created a PostgreSQL database to serve as the data source for data stored by Persistence Gateway. This database must be compatible with a supported version of PostgreSQL that does not reserve the `partition` keyword.
- Granted the PostgreSQL user CREATE, INSERT, SELECT, UPDATE, and DELETE privileges (a setup sketch follows this list).

Only plain text connections to the database are supported.
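If you are starting from scratch, the following is a minimal setup sketch. The database name `rtf_objectstore`, the user `rtf_user`, the password, and the host are hypothetical placeholders, not values required by Runtime Fabric; adapt the grants to your security policies.

```bash
# Hypothetical placeholders throughout: <db-host>, rtf_objectstore, rtf_user
psql -h <db-host> -U postgres -c "CREATE DATABASE rtf_objectstore;"
psql -h <db-host> -U postgres -c "CREATE USER rtf_user WITH PASSWORD 'replace-me';"

# Grant the privileges Persistence Gateway needs, connected to the new database
psql -h <db-host> -U postgres -d rtf_objectstore <<'SQL'
GRANT CONNECT ON DATABASE rtf_objectstore TO rtf_user;
GRANT CREATE ON SCHEMA public TO rtf_user;
GRANT INSERT, SELECT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO rtf_user;
ALTER DEFAULT PRIVILEGES IN SCHEMA public
  GRANT INSERT, SELECT, UPDATE, DELETE ON TABLES TO rtf_user;
SQL
```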
Configure Persistence Gateway
During configuration, Persistence Gateway creates the required database schema. Afterwards, when an application deployed to Runtime Fabric is configured to use persistent object storage, the Persistence Gateway writes the necessary rows to the database.
To configure Persistence Gateway, you must create a Kubernetes custom resource that allows the cluster to connect to your persistence data store.
- Create a Kubernetes secret:

  kubectl create secret generic <SECRET NAME> -n rtf --from-literal=persistence-gateway-creds='postgres://username:pass@host:port/databasename'

  - Do not change `--from-literal=persistence-gateway-creds`. You can replace `<SECRET NAME>` with any value you want to use for the secret.
  - Do not use the "@" character in the database username.
  - If your PostgreSQL user account password contains special characters, make sure to encode your connection URL before pushing it as a Kubernetes secret (see the encoding sketch after this procedure). For more information, see RTF Persistent Gateway Does Not Allow Login Due to Account Password With Special Characters.
- Create a custom resource for your data store:

  - Copy the custom resource template from Kubernetes Custom Resource Template to a file called `custom-resource.yaml`.
  - Ensure the value of `secretRef: name` matches the `name` field defined in your Kubernetes secret file.
  - Modify other fields of the custom resource template as required for your environment.
  - Run `kubectl apply -f custom-resource.yaml`.
- Check the logs of the Persistence Gateway pod to ensure it can communicate with the database:

  kubectl get pods -n rtf

  Look for pods with the name prefix persistence-gateway:

  kubectl logs -f persistence-gateway-6dfb98949c-7xns9 -n rtf

  The output of this command should be similar to the following:

  2021/04/09 16:35:31 Connecting to PostgreSQL backend...
  2021/04/09 16:35:32 Starting watcher for /var/run/secrets/rtf-object-store/persistence-gateway-creds
  2021/04/09 16:35:32 Watching for changes on /var/run/secrets/rtf-object-store/persistence-gateway-creds
  2021/04/09 16:35:35 Successfully connected to the PostgreSQL backend.
  192.168.2.101 - - [09/Apr/2021:16:35:55 +0000] "GET /api/v1/status/ready HTTP/1.1" 200 2 "" "kube-probe/1.18+"
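To spot-check the result without scrolling through the full log stream, you can filter for the success line shown above. The pod name below is the placeholder from the example output; substitute one of your own pods:

```bash
# All gateway replicas should be Running, and each should report a successful connection
kubectl get pods -n rtf | grep persistence-gateway
kubectl logs persistence-gateway-6dfb98949c-7xns9 -n rtf | grep "Successfully connected"
```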
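If the database password contains special characters (step 1), percent-encode the password portion of the connection URL before creating the secret. The following is a minimal sketch, assuming python3 is available on the machine where you run kubectl; the password shown is a made-up example:

```bash
# Percent-encode only the password portion of the connection URL (hypothetical password shown)
ENCODED_PASS=$(python3 -c "import urllib.parse; print(urllib.parse.quote('p@ss#w0rd', safe=''))")
kubectl create secret generic <SECRET NAME> -n rtf \
  --from-literal=persistence-gateway-creds="postgres://username:${ENCODED_PASS}@host:port/databasename"
```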
Persistence Gateway with Authorized Namespaces
If you use authorized namespace and want to use persistence gateway, edit the RoleBinding by adding a new rtf-persistence-gateway
subject:
subjects:
- kind: ServiceAccount
name: rtf-persistence-gateway
namespace: <rtf_namespace>
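If you prefer a non-interactive change, the subject can also be appended with kubectl patch. This is a sketch; `<ROLEBINDING_NAME>` and `<app_namespace>` are placeholders for the RoleBinding and namespace in your environment:

```bash
# Append the rtf-persistence-gateway ServiceAccount to the RoleBinding's subjects list
kubectl patch rolebinding <ROLEBINDING_NAME> -n <app_namespace> --type=json \
  -p '[{"op":"add","path":"/subjects/-","value":{"kind":"ServiceAccount","name":"rtf-persistence-gateway","namespace":"<rtf_namespace>"}}]'
```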
Kubernetes Custom Resource Template
Use the example template below to create a Kubernetes custom resource. This custom resource enables the Kubernetes cluster to connect to your data store.
apiVersion: rtf.mulesoft.com/v1
kind: PersistenceGateway
metadata:
name: default
namespace: rtf
spec:
objectStore:
backendDriver: postgresql
maxBackendConnectionPool: 20
replicas: 2
secretRef:
name: persistence-gateway-creds
resources:
limits:
cpu: 250m
memory: 250Mi
requests:
cpu: 200m
memory: 75Mi
Modify the following fields based on your environment:

Field | Description | Default Value |
---|---|---|
kind | The type of custom resource. The only supported value is PersistenceGateway. | PersistenceGateway |
metadata: name | The internal identifier for this custom resource. The value for this field should be default. | default |
metadata: namespace | The namespace where the secret is applied. The supported value is rtf. | rtf |
objectStore: backendDriver | The driver used by the data store. Only postgresql is supported. | postgresql |
objectStore: maxBackendConnectionPool | The maximum number of simultaneous open connections to the data store. | 20 |
objectStore: replicas | The number of replicas of Persistence Gateway. | 2 |
resources: limits: cpu | The CPU resource limit for the Persistence Gateway pods. | 250m |
resources: limits: memory | The memory resource limit for the Persistence Gateway pods. | 250Mi |
resources: requests: cpu | The CPU resource request for the Persistence Gateway pods. | 200m |
resources: requests: memory | The memory resource request for the Persistence Gateway pods. | 75Mi |
secretRef: name | The name of the Persistence Gateway credentials defined in the Kubernetes secret file. | persistence-gateway-creds |

The default CPU and memory limits and requests are based on a small number of deployed Mule applications. Modify these values based on the requirements of your environment.
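After editing the template (for example, raising the replicas value), re-apply it. As a sketch, `kubectl get -f` reads the kind and name back from the same file, so you do not need to know the custom resource's plural name:

```bash
kubectl apply -f custom-resource.yaml
kubectl get -f custom-resource.yaml -o yaml
```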
Persistence Gateway Application Data
Persistence Gateway stores application data using the following two tables:
CREATE TABLE IF NOT EXISTS stores (
id SERIAL PRIMARY KEY,
name VARCHAR(255) NOT NULL,
org_id VARCHAR(255) NOT NULL,
env_id VARCHAR(255) NOT NULL,
default_ttl_seconds int NOT NULL,
is_fixed_ttl bool NOT NULL,
CONSTRAINT UK_stores UNIQUE (name, org_id, env_id)
);
CREATE TABLE IF NOT EXISTS items (
id SERIAL PRIMARY KEY,
store_id INT NOT NULL REFERENCES stores(id),
key VARCHAR(255) NOT NULL,
partition VARCHAR(255) NOT NULL,
value_type VARCHAR(10) NOT NULL,
number_value integer,
string_value text,
binary_value bytea,
last_updated timestamp,
is_fixed_ttl bool NOT NULL,
ttl timestamp NOT NULL,
CONSTRAINT UK_items UNIQUE (key, store_id, partition)
);
CREATE INDEX IF NOT EXISTS IDX_items_ttl ON items(ttl);
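Given this schema, a read-only query can show what applications have stored. This is a sketch; the connection URL uses the same placeholder format as the secret created earlier:

```bash
psql 'postgres://username:pass@host:port/databasename' <<'SQL'
-- One row per stored item, joined to its owning object store
SELECT s.name AS store, i.key, i.partition, i.value_type, i.last_updated, i.ttl
FROM items i
JOIN stores s ON s.id = i.store_id
ORDER BY s.name, i.key;
SQL
```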
To migrate persistent data from one cluster to another, back up these two tables entirely from the source cluster's database and recreate them in the target cluster's database. To prevent unexpected results, do not have applications deployed on either cluster during this process.
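The following is a minimal migration sketch using pg_dump; the connection URLs are placeholders. Because both tables use SERIAL keys, verify after the restore that the owned sequences transferred as expected:

```bash
# Dump only the two Persistence Gateway tables (schema + data) from the source database
pg_dump -d 'postgres://username:pass@source-host:5432/databasename' \
  -t stores -t items --clean --if-exists -f persistence-data.sql
# Recreate them in the target cluster's database
psql -d 'postgres://username:pass@target-host:5432/databasename' -f persistence-data.sql
```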