
Troubleshoot Service Issues

When you attempt to run a service, you might encounter error messages due to issues with the deployment, such as a deployment that is not ready, image ports that do not match the deployment ports, or a service that is not exposed as a load balancer:

Deployment Is Not Ready

Your deployment might not be ready, or it might encounter errors, after you complete the installation and attempt to run a service.

Cause

This issue can occur for various reasons, such as an invalid image reference, insufficient cluster resources, or misconfigured liveness and readiness probes.

Diagnose

To diagnose the problem:

  1. Check the status of deployments:

    $ kubectl -n namespace get deployments

    The READY column shows M/N, where M < N (fewer replicas are ready than desired).

  2. Check the status of the affected pods:

    $ kubectl -n namespace get pods
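
    The output resembles the following example; the pod names, ages, and statuses shown are placeholders for illustration. The STATUS column shows the value that determines the corrective action:

    $ kubectl -n namespace get pods
    NAME                         READY   STATUS             RESTARTS   AGE
    service-a-6d9f7c8b4f-abcde   0/1     ImagePullBackOff   0          5m
    service-b-7f8b9c6d5e-fghij   0/1     Pending            0          5m
    service-c-5c6d7e8f9a-klmno   0/1     CrashLoopBackOff   4          5m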

Solution

To resolve this issue:

  1. Perform the following actions, as appropriate to the results you obtained during diagnosis:

    • ErrImagePull or ImagePullBackOff

      The image cannot be pulled. Ensure that the image value in the deployment spec or pod references a valid repository or image name, as shown in the image example following these steps.

    • Pending

      Ensure that the pod has the required resources. Either edit the deployment to request less CPU and memory, as shown in the resource example following these steps, or scale the cluster.

    • CrashLoopBackOff

      Request a description of the pod to get more information about the failure:

      $ kubectl -n namespace describe pod pod-name

  2. In the Events section of the describe output, check whether any messages describe the liveness or readiness status of the deployment:

    1. Verify that the liveness and readiness probes for the deployment are correctly configured and match an existing endpoint in the application.

    2. Extend the initial delay or periods of the probes, as shown in the probe example following these steps, and manually check the responses and the pod logs:

      $ kubectl -n namespace logs pod-name
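
If the pod status is ErrImagePull or ImagePullBackOff, one way to correct the reference is to point the deployment at a valid image. This is a sketch only: the deployment name, container name, and image reference are placeholders for your own values:

# Inspect the image currently referenced by the deployment
$ kubectl -n namespace get deployment deployment-name -o jsonpath='{.spec.template.spec.containers[*].image}'
# Point the deployment at a valid repository and image name
$ kubectl -n namespace set image deployment/deployment-name container-name=registry.example.com/org/image:1.0.0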
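
If the pod status is Pending because the cluster cannot satisfy the requested CPU or memory, lowering the requests on the deployment is one option. The deployment name and values are placeholders; choose values that fit your application and your cluster capacity:

# Lower the CPU and memory requested by the deployment's containers
$ kubectl -n namespace set resources deployment deployment-name --requests=cpu=250m,memory=256Mi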
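
If the container starts slowly and fails its probes too early, the initial delay and period can be extended with a strategic merge patch, assuming liveness and readiness probes are already defined in the deployment. The deployment name, container name, and timing values are placeholders:

# Extend the probe delays and periods for the named container
$ kubectl -n namespace patch deployment deployment-name -p '
  {"spec":{"template":{"spec":{"containers":[{"name":"container-name",
    "livenessProbe":{"initialDelaySeconds":60,"periodSeconds":20},
    "readinessProbe":{"initialDelaySeconds":30,"periodSeconds":10}}]}}}}'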

Image Ports Do Not Match Deployment Ports

Cause

This issue occurs when the ports declared in the deployment do not match the ports that the container image exposes, so requests to those ports fail.

Diagnose

To diagnose the problem, run the following command:

$ kubectl -n namespace get deployments

The READY column shows N/N (100% ready). However, requests to the declared ports fail for a reason unrelated to the application, such as "upstream connect error" or "disconnect or reset before headers".

Solution

To resolve this issue:

  1. Verify that the request is targeting one of the deployment ports.

  2. Verify that the deployment port is exposed in the Dockerfile of the image.
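
One way to check for a mismatch is to compare the container ports declared on the deployment with the ports the image exposes. The deployment name and the path to the Dockerfile are placeholders:

# Ports declared on the deployment's containers
$ kubectl -n namespace get deployment deployment-name \
    -o jsonpath='{.spec.template.spec.containers[*].ports[*].containerPort}'
# Ports exposed in the image's Dockerfile
$ grep EXPOSE Dockerfile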

Service Is Not Exposed as a Load Balancer

When you create a service, you must expose it if you want to access it from outside the cluster.

Exposing the service provides an externally accessible IP address that sends traffic to the correct port on your cluster nodes, provided that your cluster runs in a supported environment and is configured with the correct cloud load balancer provider package.

If you do not expose the service, any request to the IP address of the service remains unresponsive.
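
For example, assuming the cluster has a working load balancer provider, a deployment can be exposed with a LoadBalancer service similar to the following. The deployment name, service name, and ports are placeholders:

# Create a LoadBalancer service that forwards port 80 to the container's port 8080
$ kubectl -n namespace expose deployment deployment-name --type=LoadBalancer --port=80 --target-port=8080 --name=service-name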

Cause

This issue occurs if you did not expose the service as a load balancer.

Diagnose

Diagnose the issue with the service:

$ kubectl -n namespace get service

In the results, the TYPE column displays a value other than "LoadBalancer" and the EXTERNAL-IP column displays the value "<none>".

Solution

To resolve this issue, verify the service specification. To make the service publicly available, configure it as a load balancer by setting its type to LoadBalancer, as shown in the following example.
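
For an existing service, one way to do this is to patch its type. The service name is a placeholder, and an external IP is assigned only if the cluster has a load balancer provider:

# Change the service type so that an external IP can be provisioned
$ kubectl -n namespace patch service service-name -p '{"spec":{"type":"LoadBalancer"}}'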

If the service is not meant to be public, access the endpoint from within the cluster, or reach it from outside the cluster by using kubectl port-forward or kubectl proxy.
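
For example, kubectl port-forward can make the service reachable on a local port without exposing it publicly. The service name and ports are placeholders:

# Forward local port 8080 to the service's port 80
$ kubectl -n namespace port-forward service/service-name 8080:80

While the command runs, the service is reachable at http://localhost:8080.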