Immediate Fix: Checking Pod Health and Endpoints
A 502 Bad Gateway error in Kubernetes almost always means the Ingress controller (like NGINX) cannot communicate with your backend service. The first step is to verify if your application pods are actually running and ready to receive traffic.
```bash
# Check if pods are running and ready
kubectl get pods -n your-namespace

# Check if the service has active endpoints
kubectl get endpoints your-service-name -n your-namespace
```
If the ENDPOINTS column is empty or shows `<none>`, the Ingress has nowhere to send traffic. This usually means the pod's readiness probe is failing, or the Service's selector doesn't match the pod labels.
| Status | Meaning | Action |
|---|---|---|
| No Endpoints | Service cannot find pods. | Check labels and selectors. |
| Pod Unready | Readiness probe failed. | Check application logs. |
| Connection Refused | App not listening on port. | Verify containerPort settings. |
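To illustrate the "No Endpoints" row, here is a minimal sketch of a Service and Deployment pair (the names `your-service-name`, `my-app`, and the image are placeholders). The Service's `spec.selector` must match the Pod template's `metadata.labels` exactly, or the Endpoints list stays empty:

```yaml
# Hypothetical Service/Deployment pair; labels must match exactly.
apiVersion: v1
kind: Service
metadata:
  name: your-service-name
spec:
  selector:
    app: my-app          # must equal the Pod template labels below
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app      # a typo here (e.g. "my-apps") empties the Endpoints
    spec:
      containers:
        - name: my-app
          image: my-app:latest   # placeholder image
          ports:
            - containerPort: 8080
```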
Technical Explanation
The 502 error is an HTTP status code meaning that a server acting as a gateway or proxy received an invalid response from the upstream server it contacted. In a Kubernetes context, the Ingress controller is that proxy.
When a request hits the Ingress, it looks up the Service, finds the Pod IP addresses via Endpoints, and tries to forward the request. If the Pod is crashing, restarting, or simply not listening on the port defined in the Service, the Ingress controller returns a 502.
Common technical triggers include mismatched targetPorts in your Service YAML or application boot-up delays where the Ingress attempts to connect before the app is fully ready.
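One way to handle boot-up delays is a readiness probe, so the Pod is only added to the Endpoints list once the application actually responds. A minimal sketch (the `/healthz` path and the timing values are illustrative assumptions, not defaults):

```yaml
# Illustrative readiness probe for the container spec;
# path and timings depend on your application.
readinessProbe:
  httpGet:
    path: /healthz          # assumed health endpoint
    port: 8080              # must match the containerPort
  initialDelaySeconds: 10   # give the app time to boot
  periodSeconds: 5
  failureThreshold: 3
```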

Alternative Methods
1. Inspect Ingress Controller Logs
If the pods seem fine, the issue might be within the Ingress controller itself. Check the NGINX or Traefik logs for the specific upstream error; NGINX, for example, typically logs lines containing `connect() failed ... while connecting to upstream` when it cannot reach a backend.
```bash
# Get logs from the NGINX Ingress controller pod
kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx
```
2. Verify Port Configuration
Ensure that the Service's `spec.ports[].targetPort` matches the `containerPort` defined in your Deployment. A common mistake is sending traffic to port 80 when the application is actually listening on 8080.
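As a sketch, `targetPort` can also reference a named container port, which keeps the two definitions in sync even if the port number changes (the name `http` here is an arbitrary choice):

```yaml
# Service side: forward to the container port by name.
spec:
  ports:
    - port: 80
      targetPort: http     # resolves to the named containerPort below
---
# Deployment container spec: name the port once.
ports:
  - name: http
    containerPort: 8080
```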
3. Increase Proxy Timeouts
Sometimes the backend takes too long to respond, causing the controller to close the connection prematurely. You can add annotations to your Ingress resource to increase the timeout limits.
```yaml
# Example NGINX Ingress annotations
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "60"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "60"
```
4. Check Network Policies
If you are using a NetworkPolicy, ensure that the Ingress controller is allowed to communicate with the pods in your application namespace. An overly restrictive policy will drop packets, resulting in a gateway error.
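A minimal sketch of a policy that admits traffic from the controller's namespace, assuming the NGINX Ingress controller runs in `ingress-nginx` and relying on the automatic `kubernetes.io/metadata.name` namespace label (the `app: my-app` label and port are placeholders for your workload):

```yaml
# Hypothetical NetworkPolicy allowing the Ingress controller
# to reach the application pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-controller
spec:
  podSelector:
    matchLabels:
      app: my-app              # assumed app label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
      ports:
        - port: 8080           # the containerPort your app listens on
```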