Configuring Alerting on Runtime Fabric
Runtime Fabric on VMs / Bare Metal provides alert functionality to send notifications when system health is compromised, and enables you to send these alerts through an existing SMTP server.
Built-in Alerts
The following table lists default alerts provided by Anypoint Runtime Fabric on VMs / Bare Metal:
| Alert | Description |
|---|---|
| Docker daemon | Sends an error when the Docker daemon is down. |
| etcd cluster | Sends an error when the etcd leader is inaccessible for longer than 5 minutes or if the etcd cluster is down. |
| etcd latency | Sends a warning when follower-leader latency exceeds 500 ms, and triggers an error if it exceeds 1 second over a one-minute period. |
| Disk and inode capacity | Sends a warning when 80% of disk capacity is used; triggers a critical error when more than 90% is used. Sends a warning when 90% of inode capacity is used; triggers a critical error when more than 95% is used. |
| CPU usage | Sends a warning when 75% of capacity is used. Triggers a critical error when more than 90% is used. |
| Memory usage | Sends a warning when 80% of memory is used. Triggers a critical error when more than 90% is used. |
| InfluxDB | Triggers an error if InfluxDB is down or if there is no connection between Kapacitor and InfluxDB. |
| Pod resource usage | Triggers a critical error when more than 80% of a pod's CPU capacity is used. Triggers a critical error when more than 80% of a pod's memory capacity is used. |
| Node readiness | Sends a warning when a node has not been reported as ready for 5 minutes. |
| High resource usage | Triggers a critical error when more than 90% of CPU is used. Triggers a critical error when more than 90% of a pod's memory is used. |
| Networking | Triggers a critical error if networking is disabled on a node. |
| System availability | Sends a warning when the system is inaccessible for longer than 5 minutes. |
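As an illustration only (not part of Runtime Fabric), the disk-capacity thresholds above can be sketched as a local shell check; the `classify_usage` helper and the root-filesystem example are hypothetical:

```shell
#!/bin/sh
# Illustrative sketch: classify a usage percentage against the built-in
# disk alert thresholds (warning at 80%, critical at 90%).
classify_usage() {
  pct="$1"
  if [ "$pct" -ge 90 ]; then
    echo critical
  elif [ "$pct" -ge 80 ]; then
    echo warning
  else
    echo ok
  fi
}

# Example: classify the root filesystem's current usage (GNU df).
used=$(df --output=pcent / | tail -1 | tr -dc '0-9')
echo "root filesystem at ${used}%: $(classify_usage "$used")"
```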
Configure Alerts
You can send Runtime Fabric alerts from a designated alert sender email address through your SMTP server. You can configure only one email recipient for alerts.
1. Install `rtfctl`. The `rtfctl` utility is required to manage the alert sender email on Runtime Fabric. Perform the steps in Install rtfctl before completing the following steps.

2. Using a terminal, open a shell/SSH connection to a controller VM.

3. Create a file named `alert-smtp.yaml` that contains the following:

   ```yaml
   kind: smtp
   version: v2
   metadata:
     name: smtp
   spec:
     host: <smtp host>
     port: <smtp port>
     username: <username>
     password: <password>
   ---
   kind: alerttarget
   version: v2
   metadata:
     name: email-alerts
   spec:
     email: <email>
   ```

4. Modify the contents of `alert-smtp.yaml` using the following values specific to your environment:

   Table 1. Environment Variables

   | Key | Description |
   |---|---|
   | `<smtp host>` | The endpoint of the SMTP server. |
   | `<smtp port>` | The port used to connect to the SMTP server (465 by default). |
   | `<username>` | The username to use when connecting to the SMTP server. |
   | `<password>` | The password to use when connecting to the SMTP server. |
   | `<email>` | The recipient email address. |

5. Run the following on the controller VM:

   ```
   $ sudo gravity resource create -f alert-smtp.yaml
   ```

6. Use the `rtfctl` utility to set the alert sender email address containing your SMTP domain. Use a distinguishable email address for each cluster to identify the cluster triggering the alert.

   ```
   $ sudo ./rtfctl appliance apply alert-smtp-from "sender@example.com"
   ```
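Filling in the placeholders can be scripted before applying the resource. The following is a sketch only; the host, credentials, and recipient address are hypothetical examples, not required values:

```shell
#!/bin/sh
# Sketch: write the alert-smtp.yaml template, then fill in its
# placeholders with sed. All values below are hypothetical examples;
# substitute your own environment's settings.
cat > alert-smtp.yaml <<'EOF'
kind: smtp
version: v2
metadata:
  name: smtp
spec:
  host: <smtp host>
  port: <smtp port>
  username: <username>
  password: <password>
---
kind: alerttarget
version: v2
metadata:
  name: email-alerts
spec:
  email: <email>
EOF

sed -i \
  -e 's|<smtp host>|smtp.example.com|' \
  -e 's|<smtp port>|465|' \
  -e 's|<username>|alerts|' \
  -e 's|<password>|changeme|' \
  -e 's|<email>|ops@example.com|' \
  alert-smtp.yaml

# Then, on the controller VM:
# sudo gravity resource create -f alert-smtp.yaml
```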
Modify Email Alert Contents
If needed, you can modify the content of email alerts sent from Runtime Fabric. Alerts are written in TICKscript.
**Note:** If you customize any Runtime Fabric email alerts, appliance upgrades overwrite the customizations.
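For orientation, a minimal TICKscript alert has this general shape. The measurement name, thresholds, and email address below are hypothetical illustrations, not taken from a built-in Runtime Fabric alert:

```
// Hypothetical example only: a stream task that warns above 80,
// turns critical above 90, and emails the configured recipient.
stream
    |from()
        .database('k8s')
        .measurement('memory/usage')
    |alert()
        .warn(lambda: "value" > 80)
        .crit(lambda: "value" > 90)
        .email('ops@example.com')
```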
1. Navigate to Ops Center.

2. From the Ops Center menu, click Configuration.

3. From the Config maps drop-down menu, select kapacitor-alerts.

4. From the Namespace drop-down menu, select monitoring.

5. Select the alert you want to modify, and make your changes.

6. When complete, click Apply.

7. Use SSH or Ops Center to log in to one of the controller nodes.

8. Get the Kapacitor pod name:

   ```
   kubectl -n monitoring get pod -l component=kapacitor
   ```

   The name looks like `kapacitor-xxxxxxxxxx-xxxx`.

9. Get a list of configured alerts:

   ```
   kubectl -n monitoring exec <pod name> -c alert-loader -- kapacitor -url http://localhost:9092 list tasks
   ```

10. Delete the alert you modified:

    ```
    kubectl -n monitoring exec <pod name> -c alert-loader -- kapacitor -url http://localhost:9092 delete tasks <modified alert name>
    ```

11. Verify that the alert is re-created, the Status value is `enabled`, and the Executing value is `true`:

    ```
    kubectl -n monitoring exec <pod name> -c alert-loader -- kapacitor -url http://localhost:9092 list tasks
    ```

    The result shows a list of alerts:

    ```
    ID                       Type    Status   Executing  Databases and Retention Policies
    <modified alert name>    stream  enabled  true       ["k8s"."default"]
    ...
    ```

12. Check the new alert contents:

    ```
    kubectl -n monitoring exec <pod name> -c alert-loader -- kapacitor -url http://localhost:9092 show <modified alert name>
    ```

    The result looks like:

    ```
    ID: <alert name>
    Error:
    Template:
    Type: stream
    Status: enabled
    Executing: true
    ...
    ```
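The verification in the last steps can also be scripted. This is a sketch: the `check_task` helper and the hard-coded sample listing are hypothetical, and in practice you would pipe in the output of the `kubectl exec ... list tasks` command shown above:

```shell
#!/bin/sh
# Sketch: verify an alert is enabled and executing by parsing the
# `kapacitor ... list tasks` output. The sample listing is hard-coded
# here purely for illustration.
check_task() {
  # $1 = alert name; the task listing is read from stdin
  awk -v name="$1" \
    '$1 == name && $3 == "enabled" && $4 == "true" { found = 1 }
     END { exit !found }'
}

sample='ID          Type    Status   Executing  Databases and Retention Policies
my-alert    stream  enabled  true       ["k8s"."default"]'

if printf '%s\n' "$sample" | check_task my-alert; then
  echo "my-alert is enabled and executing"
fi
```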



