Every Kubernetes Concept Has a Story

22 March 2026

A narrative guide to understanding why Kubernetes components exist by looking at the problems they solve.

1. The Ephemeral Workload

The Problem: You run your app as a Pod. It runs your container, but then it crashes and nobody restarts it. It is just gone.

The Solution: You use a Deployment. When a pod dies, the Deployment replaces it. You declare that you want 3 replicas, and it keeps 3 running.
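A minimal Deployment sketch of that idea (the name and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                # hypothetical name
spec:
  replicas: 3                 # the desired state: always 3 pods
  selector:
    matchLabels:
      app: my-app
  template:                   # the pod template the Deployment stamps out
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0   # hypothetical image
```

If a pod crashes or a node disappears, the Deployment's controller notices that the actual count is below 3 and creates a replacement.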

2. The Networking Shift

The Problem: Every pod gets a new IP when it restarts. Another service needs to talk to your app, but the IPs keep changing. You cannot hardcode them at scale.

The Solution: You use a Service. One stable IP that always finds your pods using Labels, not IPs. Pods die and come back; the Service does not care.
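A minimal Service sketch (the label and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app        # finds pods by label, never by IP
  ports:
    - port: 80         # the stable port clients use
      targetPort: 8080 # the port the container actually listens on
```

The Service gets a stable ClusterIP, and Kubernetes keeps it pointed at whatever pods currently carry the `app: my-app` label, however often they are replaced.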

3. The Cloud Bill Crisis

The Problem: Now you have 10 services and 10 load balancers. Your cloud bill does not care that 6 of them handle almost no traffic.

The Solution: You use Ingress. One load balancer, all services behind it, with smart routing.

  • Note: Ingress is just the rules; you add an Ingress Controller (Nginx, Traefik, AWS ALB) so the rules actually work.
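A sketch of those rules, assuming an Nginx Ingress Controller is installed and the hostname and service names are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: main-ingress
spec:
  ingressClassName: nginx          # assumes the Nginx controller
  rules:
    - host: example.com
      http:
        paths:
          - path: /api             # route /api traffic...
            pathType: Prefix
            backend:
              service:
                name: api-service  # ...to this Service
                port:
                  number: 80
```

You can stack many hosts and paths under one Ingress, so ten services share one load balancer instead of paying for ten.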

4. The Configuration Mess

The Problem: You hardcode config inside the container. You end up with the wrong database in staging or the wrong API key in production. You have to rebuild the image every time a config changes.

The Solution: You use a ConfigMap. Config lives outside the container and gets injected at runtime. The same image runs in dev, staging, and production.
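A minimal ConfigMap sketch (the keys and values are hypothetical):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DATABASE_HOST: db.staging.internal   # per-environment value
  LOG_LEVEL: info
```

A container can consume it via `envFrom` with a `configMapRef`, or mount it as a volume. Either way, switching environments means swapping the ConfigMap, not rebuilding the image.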

5. The Security Incident

The Problem: Your database password is now sitting in a ConfigMap unencrypted. Anyone with basic kubectl access can read it. That is a security incident.

The Solution: You use a Secret. Sensitive data is stored separately with its own access controls, and it never gets baked into your image or your code.
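A minimal Secret sketch (the name and value are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:                # plain text here; stored base64-encoded
  DB_PASSword: change-me   # placeholder, never commit real values
```

One caveat worth knowing: by default Secrets are only base64-encoded, not encrypted, so you still want RBAC restrictions and, ideally, encryption at rest enabled on the cluster.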

6. The 2 AM Wake-up Call

The Problem: Some days you have 100 users, some days 10,000. You manually scale to 8 pods during a spike and watch them sit idle all night. You cannot babysit your cluster forever.

The Solution: You use HPA (Horizontal Pod Autoscaler). CPU crosses 70% and pods are added automatically. Traffic drops and they scale back down.
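A sketch of that 70% rule using the `autoscaling/v2` API (target names are placeholders; this assumes metrics-server is running in the cluster):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:              # what to scale
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale when average CPU crosses 70%
```

Note that the utilization percentage is measured against each pod's CPU request, which is one more reason to set requests (section 8).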

7. The "Pending" Purgatory

The Problem: Your nodes are full and new pods sit in a Pending state. HPA did its job, but your cluster has nowhere to put the new pods.

The Solution: You use Karpenter (or the Cluster Autoscaler). Pods stuck in Pending trigger a new node to appear automatically. Load drops and the node is removed. You only pay for what you actually use.
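A rough sketch of a Karpenter NodePool, assuming Karpenter's v1 API on AWS; treat the field values as illustrative shapes, not a drop-in config:

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:                      # what kinds of nodes may be created
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]
      nodeClassRef:                      # cloud-specific node settings live here
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
  limits:
    cpu: "100"                           # cap total provisioned CPU
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized  # remove idle nodes
```

Karpenter watches for Pending pods, picks an instance type that fits them, and consolidates nodes away when load drops.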

8. The Rogue Resource Hog

The Problem: One pod starts consuming 4GB of memory and nobody told Kubernetes it wasn't supposed to. It starves every other pod on that node and a cascade begins. One rogue pod takes down everything around it.

The Solution: You use Resource Requests and Limits.

  • Requests: The amount of CPU and memory the scheduler reserves for the pod when placing it on a node.

  • Limits: The hard ceiling. A container that exceeds its memory limit is killed, and CPU use beyond the limit is throttled, so no pod can starve its neighbors.
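A container-spec fragment showing both (the values are illustrative):

```yaml
# Fragment of a Deployment's pod template
containers:
  - name: my-app
    image: my-app:1.0        # hypothetical image
    resources:
      requests:              # reserved at scheduling time
        cpu: 250m            # a quarter of a CPU core
        memory: 256Mi
      limits:                # enforced at runtime
        cpu: 500m
        memory: 512Mi
```

With this in place, a leak in one pod hits its own 512Mi ceiling instead of eating the whole node.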

Summary Cheat Sheet

| The Pain | The Kubernetes Solution |
| --- | --- |
| Containers won't stay up | Deployment |
| Changing IP addresses | Service |
| Expensive load balancers | Ingress |
| Environment-specific config | ConfigMap |
| Exposed passwords | Secret |
| Manual scaling | HPA |
| Running out of server space | Karpenter / Cluster Autoscaler |
| One app crashing the whole server | Requests & Limits |