CrashLoopBackOff

Kubernetes Pod status: CrashLoopBackOff — pod keeps crashing and restarting

severity: workaround · difficulty: advanced · 85% fixable · ~20 min

Verified against Kubernetes 1.32 docs (tasks/debug/debug-application), Stack Overflow #50472960 (top answer, 700 upvotes) · Updated April 2026

Quick fix

Your pod's main process keeps exiting. Run `kubectl logs <pod> --previous` to see the last crashed container's output — the real error is there. Common causes: wrong command, missing env var, OOMKilled, failed liveness probe.

# See the last crashed container's logs (not the current one that's being restarted)
kubectl logs <pod-name> --previous

# Describe the pod to see restart reason and events
kubectl describe pod <pod-name>

# Check resource limits
kubectl get pod <pod-name> -o yaml | grep -A 5 resources

What causes this error

Kubernetes marks a pod CrashLoopBackOff when its main container has restarted multiple times and the restart back-off has grown. The kubelet intentionally slows down restart attempts (10s → 20s → 40s ... max 5min) to avoid thrashing the node. The crash reason is always in the previous container's logs or the pod events.
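The doubling schedule can be sketched in plain shell. The 10 s base and 5 min cap match the behavior described above; this loop only illustrates the arithmetic, it is not kubelet code:

```shell
# Illustrative only: the kubelet's restart back-off schedule.
backoff=10
sequence=""
for attempt in 1 2 3 4 5 6 7; do
  sequence="$sequence $backoff"
  backoff=$((backoff * 2))               # double after every crash...
  [ "$backoff" -gt 300 ] && backoff=300  # ...capped at 5 minutes
done
echo "back-off seconds:$sequence"
# → back-off seconds: 10 20 40 80 160 300 300
```

This is why a crash-looping pod appears "stuck" for minutes at a time: the kubelet is deliberately waiting before the next restart attempt.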


How to fix it

  1. Get the previous crash logs

    The current container may have restarted seconds ago and logged nothing yet. Use `--previous` to see what the last container printed before exiting.

    kubectl logs <pod-name> --previous
    kubectl logs <pod-name> -c <container-name> --previous  # if multi-container
  2. Describe the pod for events

    `kubectl describe pod` shows the events log at the bottom — look for "Back-off restarting failed container", "OOMKilled", "Liveness probe failed", or "CreateContainerConfigError".

    kubectl describe pod <pod-name> | tail -40
  3. Check resource limits vs usage

    If the pod was OOMKilled, the container's last exit code is 137 (128 + SIGKILL), and `kubectl describe pod` shows `Reason: OOMKilled` under Last State. Raise `resources.limits.memory`, or profile your app to reduce its memory usage.
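You can reproduce exit code 137 locally — it is simply 128 plus signal number 9 (SIGKILL), the same signal delivered during an OOM kill:

```shell
# A process killed with SIGKILL exits with 128 + 9 = 137 —
# the same code a container shows after an OOM kill.
sh -c 'kill -9 $$'
echo "exit code: $?"
# → exit code: 137
```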

  4. Verify the command and args

    If your Dockerfile has CMD ["./start.sh"] but the image doesn't actually contain start.sh, the container exits immediately with a non-zero code. Check with `kubectl exec -it <pod> -- /bin/sh`; if exec fails because the pod is crashing, temporarily swap CMD for `sleep 3600` so the container stays up long enough to debug.
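The missing-entrypoint failure is easy to reproduce outside the cluster: a shell asked to run a file that doesn't exist exits with 127 ("command not found"), which then surfaces as the container's exit code. The path below is deliberately nonexistent:

```shell
# Simulate CMD ["./start.sh"] when the image doesn't contain start.sh:
sh -c '/definitely-missing/start.sh' 2>/dev/null
echo "exit code: $?"   # 127 = command not found
# → exit code: 127
```

Seeing 127 (or 126, "found but not executable") in `kubectl describe pod` output is a strong hint the problem is the image contents, not the application logic.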

  5. Check liveness probes

    An aggressive liveness probe (e.g., initialDelaySeconds too low, or wrong port) can kill a healthy container. Temporarily remove the probe to test.
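As a sketch, a more forgiving probe looks like this. The field names are standard Pod spec fields; the path, port, and timings are placeholder values you would tune for your own app:

```yaml
livenessProbe:
  httpGet:
    path: /healthz          # placeholder health endpoint
    port: 8080              # must match the port your app actually listens on
  initialDelaySeconds: 30   # give the app time to start before the first check
  periodSeconds: 10
  failureThreshold: 3       # consecutive failures before the container is killed
```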

Frequently asked questions

What is a Kubernetes CrashLoopBackOff?

A pod status indicating the main container keeps exiting and kubelet is applying exponential back-off between restart attempts.

What does exit code 137 mean?

Out of memory — the container was killed with SIGKILL (137 = 128 + 9), typically by the Linux kernel's OOM killer, because it exceeded its memory limit. Raise `resources.limits.memory` or reduce app memory usage.

What does exit code 1 mean?

A generic application error — the app exited via `sys.exit(1)` or threw an unhandled exception. Check the application logs with `kubectl logs --previous`.

disclosure:Errordex runs AdSense and has zero affiliate links or sponsored content. Every fix is manually verified against official sources listed in the “sources” sidebar. If a fix here didn’t work for you, please email so we can update the page.