Understanding Kubernetes Pods in Completed State

In Kubernetes, a pod enters the Completed state when all of its containers finish running successfully and exit with status code 0. The pod's phase is then Succeeded, which kubectl displays as Completed. This is the normal, expected behavior for short-lived Jobs, CronJobs, or batch processes that are designed to run once and then stop.

You can identify this state using:

kubectl get pods

NAME                 READY   STATUS      RESTARTS   AGE
data-backup-job      0/1     Completed   0          5m

When Completed State is Normal

Expected scenarios:

  • Batch jobs: Data processing, ETL pipelines, report generation
  • Backup tasks: Scheduled backups that run and exit
  • CronJobs: Periodic tasks that execute and complete
  • One-time scripts: Migration scripts, data transformations
  • Restart policies: Pods with restartPolicy Never or OnFailure whose containers all exit successfully

For these workloads, Completed status indicates successful execution.
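For example, a CronJob that generates a nightly report is expected to leave Completed pods behind after every run. A minimal sketch; the name, schedule, image, and script are placeholders, not taken from a real cluster:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report               # placeholder name
spec:
  schedule: "0 2 * * *"              # run at 02:00 every day
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: report
            image: report-generator:latest          # placeholder image
            command: ["/bin/sh", "-c", "generate-report.sh"]
          restartPolicy: Never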

When Completed State is Unexpected

Problematic scenarios:

  • Web servers or APIs: Should stay running continuously
  • Long-running applications: Services that should never exit
  • Misconfigured commands: Container exits immediately because of a wrong command or entrypoint (see the sketch after this list)
  • Premature termination: Application exits before completing work
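A typical way this happens: a Deployment's container is started with a one-shot command, so the process exits with code 0, the pod briefly shows Completed, and the kubelet restarts it until it lands in CrashLoopBackOff. The fragment below is a hypothetical sketch of such a misconfiguration (image and command are placeholders):

# Hypothetical misconfiguration inside a Deployment's pod template
    spec:
      containers:
      - name: api
        image: my-api:latest                        # placeholder image
        command: ["/bin/sh", "-c", "echo started"]  # exits immediately, so the pod "completes"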

Understanding the Difference

Normal Completion (Jobs)

apiVersion: batch/v1
kind: Job
metadata:
  name: data-processor
spec:
  template:
    spec:
      containers:
      - name: processor
        image: data-processor:latest
        command: ["/bin/sh", "-c", "process-data.sh"]  # the script's exit code becomes the container's exit code
      restartPolicy: Never

This job is expected to complete.
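One way to block a script or CI step until this Job finishes is kubectl wait, which watches the Job's Complete condition:

# Wait up to 5 minutes for the Job to report the Complete condition
kubectl wait --for=condition=complete --timeout=300s job/data-processor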

Unexpected Completion (Deployments)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-server
  template:
    metadata:
      labels:
        app: web-server
    spec:
      containers:
      - name: server
        image: nginx:latest
        # If this completes, something is wrong!

This deployment should never complete.
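Deployments only allow restartPolicy Always, so a container that exits is restarted again and again; climbing RESTARTS and an eventual CrashLoopBackOff are the telltale signs. Using the app label from the manifest above, you can watch this happen:

# Watch pods owned by the Deployment; rising RESTARTS means the container keeps exiting
kubectl get pods -l app=web-server -w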

Diagnosing Completed Pods

Step 1: Check Pod Type

kubectl get pod <pod-name> -o jsonpath='{.metadata.ownerReferences[*].kind}'
# If owner is Job or CronJob, Completed is normal
# If owner is Deployment or ReplicaSet, investigate why it completed

Step 2: Check Exit Code

kubectl describe pod <pod-name> | grep "Exit Code:"
# Exit code 0 = successful completion
# Any other exit code = failure
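If the describe output is noisy, the exit code can also be read straight from the container status with jsonpath:

kubectl get pod <pod-name> -o jsonpath='{.status.containerStatuses[*].state.terminated.exitCode}'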

Step 3: Review Logs

kubectl logs <pod-name>
# Check if job completed its intended work
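If the container has been restarted (common when a Deployment pod keeps exiting), the current log stream may be empty; --previous shows the last terminated run, and -c selects a container in multi-container pods:

kubectl logs <pod-name> --previous
kubectl logs <pod-name> -c <container-name>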

Step 4: Check Container Command

kubectl get pod <pod-name> -o yaml | grep -A 5 "command:"
# Verify the command is correct for the workload type
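grep can miss arguments that wrap across lines; jsonpath prints the effective command and args fields directly:

kubectl get pod <pod-name> -o jsonpath='{.spec.containers[*].command}'
kubectl get pod <pod-name> -o jsonpath='{.spec.containers[*].args}'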

Common Issues and Solutions

Issue 1: Deployment Pods Completing

Problem: Deployment pods that should run continuously are completing.

Solution:

  • Ensure main process doesn't exit
  • Use a foreground command such as nginx -g 'daemon off;' for daemon-style servers (see the snippet after this list)
  • Check if application is designed to run as a service
  • Verify startup commands are correct
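As a sketch of keeping the main process in the foreground, only the container fields are shown here; the rest of the Deployment stays as in the earlier manifest:

      containers:
      - name: server
        image: nginx:latest
        # Run nginx in the foreground so PID 1 never exits
        command: ["nginx", "-g", "daemon off;"]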

Issue 2: Jobs Not Completing

Problem: Job pods that should complete are stuck in other states.

Solution:

  • Check if job is waiting for dependencies
  • Verify resource availability
  • Review Job timeout and retry settings (activeDeadlineSeconds, backoffLimit), checked with the commands below
  • Check for errors preventing completion
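kubectl describe usually surfaces these problems in the Events section, and the Job spec shows whether a deadline or retry limit is cutting the run short:

# Events often explain a stuck Job (unschedulable pods, failed image pulls, quota limits)
kubectl describe job <job-name>

# Check configured limits; activeDeadlineSeconds and backoffLimit can end a Job early
kubectl get job <job-name> -o jsonpath='{.spec.activeDeadlineSeconds} {.spec.backoffLimit}'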

Issue 3: Premature Completion

Problem: Container exits before completing work.

Solution:

  • Review container logs for early exits
  • Check application configuration
  • Verify dependencies are available
  • Add error handling so failures return a non-zero exit code instead of a clean completion (see the shell sketch after this list)
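For shell-based batch containers, a fail-fast script makes the container's exit code reflect the first error rather than reporting a clean completion. A minimal sketch; the step scripts are placeholders:

#!/bin/sh
# Abort on the first failing command and on use of unset variables
set -eu

fetch-input.sh        # placeholder steps; any failure stops the script
transform-data.sh
upload-results.sh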

Managing Completed Pods

View Completed Pods

# All completed pods
kubectl get pods --field-selector=status.phase=Succeeded

# In specific namespace
kubectl get pods -n <namespace> --field-selector=status.phase=Succeeded

Clean Up Completed Pods

# Delete completed job pods (Job controller will keep history)
kubectl delete pod <completed-pod-name>

# Clean up old completed jobs
kubectl delete jobs --field-selector status.successful=1
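On recent kubectl versions, Succeeded (Completed) pods can also be swept across all namespaces in one command:

# Delete all Succeeded pods in every namespace (use with care)
kubectl delete pods --field-selector=status.phase=Succeeded --all-namespaces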

Retain Completed Pods

Jobs can control how long finished pods are retained before automatic cleanup. A partial spec showing the relevant fields (a complete Job also needs metadata and a pod template):

apiVersion: batch/v1
kind: Job
spec:
  backoffLimit: 3                 # retries before the Job is marked failed
  ttlSecondsAfterFinished: 86400  # keep the finished Job and its pods for 24 hours
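For CronJobs, how many finished Jobs (and their pods) stick around is controlled separately by the history limits; a sketch of the relevant spec fields:

apiVersion: batch/v1
kind: CronJob
spec:
  successfulJobsHistoryLimit: 3   # keep the last 3 successful Jobs (the default)
  failedJobsHistoryLimit: 1       # keep the last failed Job (the default)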

Best Practices

  1. Use appropriate workload types: Jobs for one-time tasks, Deployments for services
  2. Set TTL for completed jobs: Automatically clean up after completion
  3. Monitor completion times: Track job execution duration
  4. Handle exit codes properly: Ensure successful completion returns 0
  5. Review logs: Always check logs to confirm successful completion
  6. Set appropriate restart policies: Never or OnFailure for Jobs; Deployments only accept Always

Conclusion

Completed state is normal for Jobs and batch processes, but unexpected for long-running applications. Always check the pod's owner (Job vs Deployment) to determine if completion is expected. For Deployments, investigate why pods are completing when they should remain running.

Remember: Completed is good for Jobs, but bad for Deployments!