
To Migrate Anypoint Platform Private Cloud Edition, Version 1.6.0 to 1.6.1 (One Node)

This topic describes how to migrate a one-node installation of Anypoint Platform Private Cloud Edition from version 1.6.0 to version 1.6.1.

During the migration process, there may be some downtime while the node is migrated.


Before migrating, ensure that you have met the following prerequisites:

  • Perform a backup of your system as described in About Backup and Recovery.

  • Ensure that your environment meets all of the system and network requirements described in About Minimum System Requirements.

  • Enable TCP ports 5973, 3022, and 7373 between nodes to enable communication with the database cluster.

  • Ensure you have permission to run the sudo command on the node where you launch the migration tool.

  • Ensure the kubectl command is available on the node where you are performing the migration. To verify that kubectl is installed, run the following:

    $ sudo gravity enter
    $ kubectl
  • Ensure that you have run the following script on your one-node installation. You must run this script to avoid data loss in your installation.
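The TCP port prerequisite above can be checked with a small script before you begin. This is a minimal sketch: the host 127.0.0.1 is a placeholder, so substitute the address of the node you are checking.

```shell
#!/usr/bin/env bash
# Sketch: verify that the TCP ports required for database-cluster
# communication are reachable. The host is a placeholder -- substitute
# the address of the node you are checking.
check_port() {
  local host="$1" port="$2"
  if timeout 2 bash -c "</dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

for port in 5973 3022 7373; do
  echo "port ${port}: $(check_port 127.0.0.1 "${port}")"
done
```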

Performing the Upgrade

  1. Obtain the anypoint-1.6.1-installer.tar.gz archive from your customer success representative.

  2. Use ssh to log in to the node of your installation.

  3. Upload the installer to the node of your installation.

  4. Uncompress the installer archive.

    mkdir anypoint-1.6.1
    tar -xzf anypoint-1.6.1-installer.tar.gz -C anypoint-1.6.1
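Before uncompressing, you can confirm that the archive is present and readable as a gzip tarball, which catches an incomplete upload early. This is a sketch; the verify_archive helper is illustrative, not part of the installer.

```shell
#!/usr/bin/env bash
# Sketch: confirm the installer archive exists and is a valid gzip
# tarball before extracting it. The helper function is illustrative.
verify_archive() {
  local archive="$1"
  if [ ! -f "${archive}" ]; then
    echo "ERROR: ${archive} not found" >&2
    return 1
  fi
  # List the contents without extracting; this fails on a corrupt download.
  tar -tzf "${archive}" >/dev/null
}

if verify_archive "anypoint-1.6.1-installer.tar.gz"; then
  mkdir -p anypoint-1.6.1
  tar -xzf anypoint-1.6.1-installer.tar.gz -C anypoint-1.6.1
fi
```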
  5. Navigate to the anypoint-1.6.1 directory, then run the upload script.

    cd anypoint-1.6.1
    sudo ./upload

    Depending on your network configuration, this command may take a while to complete. Wait until the command finishes before proceeding to the next step.

Note: If this command fails, it may be due to a lack of space in your `/tmp` directory.
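One way to check available space before retrying is a quick `df` query. A minimal sketch; the 1 GB threshold is an assumption, not a documented requirement of the installer.

```shell
#!/usr/bin/env bash
# Sketch: report free space in /tmp and warn when it falls below a
# threshold. The 1 GB threshold is an assumption, not a documented
# requirement of the installer.
tmp_free_kb() {
  df -Pk /tmp | awk 'NR == 2 {print $4}'
}

free_kb=$(tmp_free_kb)
threshold_kb=$((1024 * 1024))  # 1 GB
if [ "${free_kb}" -lt "${threshold_kb}" ]; then
  echo "WARNING: only ${free_kb} KB free in /tmp"
else
  echo "/tmp has ${free_kb} KB free"
fi
```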

  1. Download the installation script and copy it to each node in your cluster.

    1. Download the script from the following URL.
    2. Copy the script to the directory where you ran the ./upload command.

  2. Export RBAC bootstrap package.

    From the master node, run the following command:

    ./ export-rbac
  3. Scale down deployments

    From the master node, run the following command:

    ./ scale-down
  4. Initiate the update operation

    From the master node, run the following command:

    ./ start-update

    The output of this command should be similar to the following:

    ./ start-update
    Initiating the update process, setting it in manual mode
    updating anypoint from 1.6.0 to 1.6.1
    update operation (3f097853-64dd-4201-973f-2bb4a686c9ee) has been started
    The update operation has been created in manual mode.
  5. Bootstrap the system update process

    From the master node, run the following command:

    ./ bootstrap-system
  6. Perform the system update

    Log in to each node in your cluster and run the following command. You must run this command sequentially on each node; wait until it completes on one node before running it on the next.

    ./ update-system
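The sequential requirement can be expressed as a simple loop that finishes on one node before moving to the next. This is a dry-run sketch: the node names and the ssh invocation are assumptions, so the commands are only echoed here rather than executed.

```shell
#!/usr/bin/env bash
# Dry-run sketch: run the update-system step on each node in turn,
# waiting for one node to finish before starting the next. The node
# names and the ssh invocation are assumptions -- the commands are
# echoed, not executed.
NODES="node-1 node-2 node-3"   # hypothetical node names

run_on_each_node() {
  local node
  for node in ${NODES}; do
    # Replace echo with a real ssh call to run this for real.
    echo "ssh ${node} 'cd anypoint-1.6.1 && sudo ./ update-system'"
  done
}

run_on_each_node
```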
  7. Bootstrap the RBAC configuration in the cluster

    From the master node, run the following command:

    ./ bootstrap-rbac
  8. Determine the name of each of your nodes using the following command:

    sudo gravity enter
    kubectl get nodes
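The node names from this listing feed the drain and uncordon steps that follow. A small helper can extract them from the kubectl output; this is a sketch that reads the listing on stdin, so it works equally on live output or a saved copy.

```shell
#!/usr/bin/env bash
# Sketch: extract node names from `kubectl get nodes` output, read on
# stdin, for use with the drain and uncordon steps.
node_names() {
  awk 'NR > 1 {print $1}'
}

# Real usage (inside the gravity shell):
#   kubectl get nodes | node_names
```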
  9. Exit the gravity shell

  10. Drain each of the nodes in your cluster.

    From the master node, run the following command once for each node in your cluster. You must pass the node name for each node.

    ./ drain=<node-name>

    The output of this command should be similar to the following:

    ./ drain=
    Draining node
    node "" cordoned
    WARNING: Ignoring DaemonSet-managed pods: cassandra-p4mjy, stolon-keeper-d2get, gravity-site-tgme5, kube-dns-v18-41u28, log-forwarder-ujp6d; Deleting pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: bandwagon; Deleting pods with local storage: bandwagon-mulesoft-install-35afd2-ingx2, gravity-site-tgme5, monitoring-app-install-39664d-l7xo4, pithos-app-install-95fa7b-58flh, site-app-post-install-916df9-03pol, stolon-app-install-5480c4-v6n81
    pod "exchange-api-db-migration-q8itn" evicted
    pod "site-app-post-install-916df9-03pol" evicted
    pod "pithos-app-install-95fa7b-58flh" evicted

    Before continuing, ensure that all pods are in the Running or Pending state. No pod should be in the CrashLoopBackOff or Terminating state.
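The pod-state check can be automated by scanning the `kubectl get pods` listing for anything that is not Running or Pending. A sketch; the helper reads the listing on stdin so it can be fed from kubectl or from a saved file.

```shell
#!/usr/bin/env bash
# Sketch: print any pod whose STATUS column is neither Running nor
# Pending. Reads a `kubectl get pods` listing on stdin.
bad_pods() {
  awk 'NR > 1 && $3 != "Running" && $3 != "Pending" {print $1, $3}'
}

# Real usage (inside the gravity shell):
#   kubectl get pods | bad_pods
```

If the helper prints nothing, all pods are in an acceptable state and it is safe to continue.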

  11. Make each of the nodes in your cluster schedulable.

    From the master node, run the following command for each node in your cluster. You must pass the node name for each node.

    ./ uncordon=<node-name>
  12. Check the status of your cluster.

    kubectl get pods

    Verify that all of the pods in your cluster are running. Wait until all pods are running before continuing to the next procedure.
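Waiting for all pods to come up can be scripted as a polling loop that retries a check command until it succeeds or a timeout expires. This is a generic sketch; the kubectl condition shown in the comment is an assumption about how you would plug it in.

```shell
#!/usr/bin/env bash
# Sketch: poll a check command until it succeeds or the attempt limit
# is reached. Intended for waiting on pods to reach the Running state.
wait_until() {
  local tries="$1"; shift
  local i
  for i in $(seq 1 "${tries}"); do
    if "$@"; then
      echo "ready after ${i} attempt(s)"
      return 0
    fi
    sleep 1
  done
  echo "timed out after ${tries} attempts" >&2
  return 1
}

# Real usage (assumption; run inside the gravity shell):
#   wait_until 60 <command that exits 0 when all pods are Running>
```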

  13. Fix the LDAP config directory permissions

    ./ fix-ldap
  14. Initiate the application update

    ./ update-app
  15. Finalize and complete the update operation

    ./ finalize-update
  16. If you are running a load balancer in your installation, update the health check on the load balancer.

    You must enable port 10248 for the load balancer health check.
