Symptoms & Diagnosis
The OOMKilled error (Exit Code 137) in Kubernetes occurs when a container is terminated because it exceeded its defined memory limit or because the node itself ran out of available RAM. In Linux, the kernel's Out of Memory Killer (OOM Killer) is invoked when memory is exhausted and kills the processes it scores as the best candidates, in order to protect system stability.
The most visible symptom is usually a Pod that keeps restarting and eventually enters the CrashLoopBackOff state. You can spot this by running the following command:
kubectl get pods
To confirm the specific error, inspect the pod details. Look for the Last State section in the output of the describe command:
kubectl describe pod [POD_NAME]
| Field | Value | Description |
|---|---|---|
| State | Terminated | The container has stopped running. |
| Reason | OOMKilled | The process was killed for exceeding memory limits. |
| Exit Code | 137 | 128 (Signal) + 9 (SIGKILL) = 137. |
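In practice, the relevant portion of the describe output looks roughly like this (illustrative excerpt; surrounding fields such as timestamps will vary):

```
Last State:     Terminated
  Reason:       OOMKilled
  Exit Code:    137
```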

Troubleshooting Guide
Fixing exit code 137 requires determining whether the issue is caused by a restrictive configuration or by a memory leak within the application. Follow these steps to isolate the cause.
Step 1: Check Resource Limits
Examine the YAML configuration of your deployment. If the limits.memory value is set too low for the application’s actual needs, the container will be killed as soon as it tries to allocate more RAM than allowed.
resources:
  limits:
    memory: "512Mi"
  requests:
    memory: "256Mi"
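For context, here is where that block sits inside a container spec. This is an illustrative fragment only; the Deployment and image names (`my-app`, `my-app:latest`) are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app          # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:latest   # placeholder image
        resources:             # per-container, not per-pod
          requests:
            memory: "256Mi"
          limits:
            memory: "512Mi"
```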
Step 2: Monitor Real-time Usage
Use the Metrics Server to see how much memory the pod is actually consuming before it crashes. This helps differentiate between a sudden spike and a slow climb (leak).
kubectl top pod [POD_NAME] --containers
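Sample output (illustrative; your column values will differ). A container sitting just below its limit, as here with a 512Mi limit, is a strong hint the next allocation will trigger an OOM kill:

```
POD      NAME   CPU(cores)   MEMORY(bytes)
my-app   app    12m          498Mi
```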
Step 3: Analyze Node Pressure
Sometimes, a pod is killed because the Node itself is under “MemoryPressure,” even if the container hasn’t reached its specific limit. Check the node status to ensure the underlying infrastructure is healthy.
kubectl describe node [NODE_NAME]
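In the Conditions section of the node description, look for MemoryPressure. An excerpt like the following (illustrative) means the kubelet is reclaiming memory and may evict pods regardless of their individual limits:

```
Conditions:
  Type             Status
  ----             ------
  MemoryPressure   True
```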
Prevention
Preventing OOMKilled errors involves a combination of better resource planning and application optimization. Proper observability is key to staying ahead of memory exhaustion.
Right-Sizing Resources
Set requests to the typical amount of memory the application uses during normal operation, and set limits slightly higher to accommodate bursts. Avoid leaving limits undefined (or effectively unlimited), as this allows a single pod to monopolize node memory and destabilize its neighbors.
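As a worked example, assume a workload that holds steady around 300Mi with occasional bursts; a sketch of the corresponding sizing (the 1.5x headroom factor is a common starting point, not a universal rule):

```yaml
resources:
  requests:
    memory: "300Mi"   # typical usage: what the scheduler reserves
  limits:
    memory: "450Mi"   # ~1.5x requests: burst headroom before OOMKill
```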
Implement Vertical Pod Autoscaler (VPA)
The VPA can automatically adjust your memory requests and limits based on historical usage. This eliminates the guesswork involved in manual configuration and ensures pods have enough breathing room.
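A minimal VPA manifest looks like the following. This assumes the VPA components are already installed in the cluster, and the Deployment name `my-app` is a placeholder:

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app        # placeholder: the workload to right-size
  updatePolicy:
    updateMode: "Auto"  # VPA may evict pods to apply new requests
```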
Application Level Profiling
If memory usage continues to climb over time, your application likely has a memory leak. Use language-specific profilers (like Go pprof, Python memory_profiler, or Java VisualVM) to identify objects that are not being garbage collected.
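As a minimal sketch of this technique, the snippet below uses Python's standard-library tracemalloc (a no-install alternative to memory_profiler) to compare two heap snapshots; the `handle_request` function and its deliberate leak are invented for demonstration:

```python
import tracemalloc

leak = []  # simulated bug: module-level list that only ever grows

def handle_request():
    # Each call retains ~100 KB that is never released.
    leak.append(bytearray(100_000))

tracemalloc.start()
snap1 = tracemalloc.take_snapshot()

for _ in range(50):
    handle_request()

snap2 = tracemalloc.take_snapshot()
# The largest positive size_diff points at the allocation site of the leak.
top = snap2.compare_to(snap1, "lineno")[0]
print(top.size_diff > 4_000_000)  # → True (~5 MB retained by handle_request)
```

In a real service you would take snapshots at intervals under steady traffic; allocation sites whose size_diff climbs monotonically between snapshots are the leak candidates.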
Limit Shared Memory Usage
If your application uses /dev/shm, remember that container runtimes default it to 64MB. If your app requires more shared memory, mount an emptyDir volume with `medium: Memory` at /dev/shm to avoid OOM issues.
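A sketch of that volume setup, with placeholder container and image names; note that a memory-backed emptyDir counts against the container's memory limit:

```yaml
spec:
  containers:
  - name: my-app             # placeholder name
    image: my-app:latest     # placeholder image
    volumeMounts:
    - name: dshm
      mountPath: /dev/shm
  volumes:
  - name: dshm
    emptyDir:
      medium: Memory         # tmpfs-backed, replaces the 64MB default
      sizeLimit: 1Gi         # counts toward the container's memory limit
```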