Troubleshoot Service Issues

When you attempt to run a service, you might encounter error messages caused by issues with the deployment, such as a deployment that is not ready, mismatched deployment ports, or load balancer issues:

Deployment Is Not Ready

Your deployment might not be ready or your deployment might encounter errors after you complete your installation and attempt to run a service.


This issue can occur for various reasons.


To diagnose the problem:

  1. Check the status of deployments:

    $ kubectl -n namespace get deployments

    The READY column shows M/N (M < N), which indicates that some replicas are not ready.

  2. Check the status of the affected pods:

    $ kubectl -n namespace get pods


To resolve this issue:

  1. Perform the following actions, as appropriate to the results you obtained during diagnosis:

    • ErrImagePull or ImagePullBackOff

      The image cannot be pulled. Ensure that the image value in the deployment spec or pod references a valid repository or image name.

    • Pending

      Ensure that the pod has the required resources. Either edit the deployment to request lower CPU and memory, or scale the cluster.

    • CrashLoopBackOff

      Request a description of the pod to get more information about the failure:

      $ kubectl -n namespace describe pod pod-name

  2. In the Events section of the description output, check for messages that describe failed liveness or readiness probes:

    1. Verify that the liveness and readiness probes for the deployment are correctly configured and match an existing endpoint in the application.

    2. Extend the initial delay or the probe periods, and manually check the responses and logs of the pod: $ kubectl -n namespace logs pod-name
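As an illustration, probe timing and endpoints such as the following can be tuned in the deployment spec. This is a hypothetical fragment: the paths, port, and delay values are placeholders, not values from this article.

```yaml
# Hypothetical deployment fragment: adjust probe timing if the
# application needs longer to start than the defaults allow.
livenessProbe:
  httpGet:
    path: /healthz         # must match an existing endpoint in the application
    port: 8080
  initialDelaySeconds: 30  # extend this if the container starts slowly
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5
```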

Image Ports Do Not Match Deployment Ports


Requests to specific ports can fail for various reasons.


To diagnose the problem, run the following command:

$ kubectl -n namespace get deployments

The READY column shows N/N (100% ready). However, requests to the declared ports fail for a reason unrelated to the application, such as "upstream connect error" or "disconnect or reset before headers".


To resolve this issue:

  1. Verify that the request is targeting one of the deployment ports.

  2. Verify that the deployment port is exposed in the Dockerfile of the image.
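For example, the container port declared in the deployment must match the port the application listens on inside the image. A sketch with placeholder names and values:

```yaml
# Hypothetical deployment fragment: containerPort must match the port
# the application listens on (and the EXPOSE line in the Dockerfile).
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0
      ports:
        - containerPort: 8080   # should match EXPOSE 8080 in the Dockerfile
```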

Service Is Not Exposed as a Load Balancer

When you create a service, you must expose it if you want to access it from outside.

Exposing the service provides an externally accessible IP address that sends traffic to the correct port on your cluster nodes, provided that your cluster runs in a supported environment and is configured with the correct cloud load balancer provider package.

If you did not expose the service, any request to the IP address of the service hangs without a response.


This issue occurs if you did not expose the service as a load balancer.


To diagnose the problem, check the service: $ kubectl -n namespace get service

In the results, the TYPE column displays a value other than "LoadBalancer" and the EXTERNAL-IP column displays the value "<none>".


To resolve this issue, verify the service specification. You must configure the service as a load balancer to make it publicly available.

If the service is not meant to be public, use the endpoint from within the cluster, or access it from outside the cluster by using one of the following commands:

$ kubectl -n namespace port-forward service/service-name local-port:service-port

$ kubectl proxy
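A minimal service specification of type LoadBalancer might look like the following sketch. The service name, selector, and ports are placeholder values for illustration only.

```yaml
# Hypothetical service manifest: type LoadBalancer requests an external IP
# from the cluster's cloud load balancer provider.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer    # without this, the type defaults to ClusterIP and no external IP is assigned
  selector:
    app: my-app
  ports:
    - port: 80          # port exposed by the load balancer
      targetPort: 8080  # port the pod listens on
```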
