
Configure Alerting on Runtime Fabric

Anypoint Runtime Fabric on VMs / Bare Metal can send alert notifications through an existing SMTP server when system health is compromised.

Built-in Alerts

The following default alerts are provided by Anypoint Runtime Fabric on VMs / Bare Metal:

docker: Sends an error when the Docker daemon is down.

etcd: Sends an error when the etcd master is inaccessible for longer than 5 minutes or when the etcd cluster is down.

etcd_latency_batch: Sends a warning when follower-leader latency exceeds 500 ms, and triggers an error if it exceeds 1 second over a one-minute period.

filesystem: Sends a warning when 80% of disk capacity is used and triggers a critical error when more than 90% is used. Also sends a warning when 90% of inode capacity is used and triggers a critical error when more than 95% is used.

high-cpu: Sends a warning when 75% of CPU capacity is used. Triggers a critical error when more than 90% is used.

high_memory: Sends a warning when 80% of memory is used. Triggers a critical error when more than 90% is used.

influxdb_health_batch: Triggers an error if InfluxDB is down or if there is no connection between Kapacitor and InfluxDB.

ingress: Triggers a critical error when more than 80% of a pod's CPU or memory capacity is used.

kubernetes_node: Sends a warning when a node has not been reported as ready for 5 minutes.

muleapps: Triggers a critical error when more than 90% of a pod's CPU or memory is used.

network_params: Triggers a critical error if networking is disabled on a node.

uptime: Sends a warning when the system is inaccessible for longer than 5 minutes.

Configure Alerts

You can send Runtime Fabric alerts from a designated alert sender email address through your SMTP server. You can configure only one email recipient for alerts.

  1. Install rtfctl.

    The rtfctl utility is required to manage alert sender email on Runtime Fabric. Perform the steps in Install rtfctl before completing the following steps.

  2. Using a terminal, open a shell/SSH connection to a controller VM.

  3. Create a file named alert-smtp.yaml that contains the following:

    kind: smtp
    version: v2
    metadata:
      name: smtp
    spec:
      host: <smtp host>
      port: <smtp port>
      username: <username>
      password: <password>
    ---
    kind: alerttarget
    version: v2
    metadata:
      name: email-alerts
    spec:
      email: <email>
  4. Modify the contents of alert-smtp.yaml by replacing the following placeholders with values specific to your environment. A filled-in sketch appears after these steps.

    <smtp host>: The endpoint of the SMTP server.

    <smtp port>: The port used to connect to the SMTP server (465 by default).

    <username>: The username to use when connecting to the SMTP server.

    <password>: The password to use when connecting to the SMTP server.

    <email>: The recipient email address.

  5. Run the following command on the controller VM:

    $ sudo gravity resource create -f alert-smtp.yaml
  6. Use the rtfctl utility to set the alert sender email address. Use an address in your SMTP domain, and use a distinguishable sender address for each cluster so you can identify which cluster triggered an alert.

    $ sudo ./rtfctl appliance apply alert-smtp-from "sender@example.com"
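
For reference, the following is a filled-in sketch of alert-smtp.yaml together with the commands from steps 5 and 6. The SMTP host, credentials, and email addresses shown are hypothetical placeholders, not required values; substitute the details for your environment.

    # Hypothetical example values; replace with your own environment details.
    kind: smtp
    version: v2
    metadata:
      name: smtp
    spec:
      host: smtp.example.com        # hypothetical SMTP endpoint
      port: 465                     # default SMTPS port
      username: alerts-bot          # hypothetical account name
      password: changeme            # replace with the real credential
    ---
    kind: alerttarget
    version: v2
    metadata:
      name: email-alerts
    spec:
      email: ops-team@example.com   # the single recipient address

    $ sudo gravity resource create -f alert-smtp.yaml
    $ sudo ./rtfctl appliance apply alert-smtp-from "rtf-cluster-1@example.com"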

Modify Email Alert Contents

If needed, you can modify the content of email alerts sent from Runtime Fabric. Alerts are written in TICKscript.

If you customize any Runtime Fabric email alerts, appliance upgrades overwrite the customizations.
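
For orientation, the following is a minimal TICKscript sketch of the shape an alert task takes. The measurement name, field, and thresholds here are illustrative assumptions, not the exact definitions shipped in the kapacitor-alerts config map.

    // Illustrative TICKscript only; the alert definitions shipped with
    // Runtime Fabric differ. 'cpu/usage_rate' and the thresholds are assumed.
    stream
        |from()
            .measurement('cpu/usage_rate')
        |alert()
            .warn(lambda: "value" > 75)
            .crit(lambda: "value" > 90)
            .message('CPU usage is {{ index .Fields "value" }}')
            .email()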

  1. Navigate to Ops Center.

  2. From the Ops Center menu, click Configuration.

  3. From the Config maps drop-down menu, select kapacitor-alerts.

  4. From the Namespace drop-down menu, select monitoring.

  5. Select the alert you want to modify, and make changes.

  6. When complete, click Apply.

  7. Use SSH or Ops Center to log in to one of the controller nodes.

  8. Get the kapacitor pod name:

    kubectl -n monitoring get pod -l component=kapacitor

    The name will look like kapacitor-xxxxxxxxxx-xxxx.

  9. Get a list of configured alerts (a scripted version of steps 8 and 9 appears after this list):

    kubectl -n monitoring exec <pod name> -c alert-loader -- kapacitor -url http://localhost:9092 list tasks
  10. Delete the alert you modified:

    kubectl -n monitoring exec <pod name> -c alert-loader -- kapacitor -url http://localhost:9092 delete tasks <modified alert name>
  11. Verify that the alert is re-created, the Status value is enabled, and the Executing value is true:

    kubectl -n monitoring exec <pod name> -c alert-loader -- kapacitor -url http://localhost:9092 list tasks

    The result shows a list of alerts:

    ID                     Type      Status    Executing Databases and Retention Policies
    <modified alert name>  stream    enabled   true      ["k8s"."default"]
    ...
  12. Check the new alert contents:

    kubectl -n monitoring exec <pod name> -c alert-loader -- kapacitor -url http://localhost:9092 show <modified alert name>

    The result looks like:

    ID: <modified alert name>
    Error:
    Template:
    Type: stream
    Status: enabled
    Executing: true
    ...
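
If you prefer not to copy pod names by hand, the pod lookup and task listing can be combined into a short shell sketch. The jsonpath expression assumes a single Kapacitor pod in the monitoring namespace.

    # Capture the Kapacitor pod name, then list the alert tasks it runs.
    # Assumes exactly one matching pod; adjust the index otherwise.
    POD=$(kubectl -n monitoring get pod -l component=kapacitor \
      -o jsonpath='{.items[0].metadata.name}')
    kubectl -n monitoring exec "$POD" -c alert-loader -- \
      kapacitor -url http://localhost:9092 list tasks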