The Uptime Engineer
👋 Hi, I am Yoshik Karnawat
10 essential fundamentals you need to run reliable applications in Kubernetes today. Don't miss the resource links below.
You've deployed your first pod to Kubernetes.
It worked.
Now what?
You Google "Kubernetes best practices" and get overwhelmed.
Service mesh. Admission controllers. Custom operators. Policy engines.
Here's the truth:
You don't need any of that yet.
You need working apps, basic monitoring, and the ability to debug when things break.
Here are the 10 fundamentals that will save you from most disasters.
1. Set Resource Limits (Start Simple)
Don't run containers without resource requests and limits.
Start with basic values:
resources:
  requests:
    memory: "256Mi"
    cpu: "250m"
  limits:
    memory: "512Mi"
    cpu: "500m"
Why this matters:
Without limits, one container can starve others
Without requests, Kubernetes can't schedule pods properly
Start conservative, then adjust based on actual usage
Use kubectl top pods to see actual resource usage, then set limits 20-30% higher than what you observe.
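If you don't want to repeat those values in every manifest, a LimitRange can inject namespace-wide defaults into containers that omit them. A minimal sketch (the dev namespace and values here are illustrative, not prescriptive):

```yaml
# Applies default requests/limits to containers in the "dev"
# namespace that don't declare their own.
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: dev
spec:
  limits:
    - type: Container
      defaultRequest:    # used when a container omits requests
        memory: "256Mi"
        cpu: "250m"
      default:           # used when a container omits limits
        memory: "512Mi"
        cpu: "500m"
```

Explicit values in a pod spec always win over these defaults, so this is a safety net, not a replacement for per-app tuning.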
2. Never Run Single Pods
Always use Deployments, not standalone Pods.
Bad:
kubectl run my-app --image=my-image
Good:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2  # Start with at least 2
Why:
If a standalone pod is deleted or its node fails, Kubernetes won't recreate it, because nothing is managing it.
Deployments give you rolling updates, rollbacks, and self-healing.
You get zero-downtime deploys for free.
Start with replicas: 2 for everything. It forces you to build stateless apps from day one.
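The snippet above is truncated: a Deployment also needs a selector and a pod template to be valid. A complete minimal version looks like this (the name, labels, and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2                 # Start with at least 2
  selector:
    matchLabels:
      app: my-app             # Must match the pod template labels
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-image:1.0  # Placeholder; pin a real tag, not :latest
```

Apply it with kubectl apply -f deployment.yaml and Kubernetes will keep two replicas running for you.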
3. Set Up Basic Health Checks
Add liveness and readiness probes to every container.
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 5
Why:
Liveness probe: restarts containers that are stuck
Readiness probe: stops sending traffic to pods that aren't ready
Prevents cascading failures during deployments
Even returning 200 OK from /health is better than nothing.
4. Use Namespaces to Organize
Don't put everything in default.
Create namespaces for different environments:
kubectl create namespace dev
kubectl create namespace staging
kubectl create namespace prod
Why:
Easier to set resource quotas per environment
Prevents accidental changes to production
Makes kubectl get pods output actually readable
Use namespace-specific contexts so you don't accidentally deploy to prod.
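The quota point above can be sketched with a ResourceQuota, which caps total consumption in one namespace. A minimal example (the dev namespace and ceilings are illustrative; tune them to your cluster):

```yaml
# Caps total resource consumption in the "dev" namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev
spec:
  hard:
    requests.cpu: "4"       # sum of all CPU requests
    requests.memory: 8Gi    # sum of all memory requests
    limits.cpu: "8"         # sum of all CPU limits
    limits.memory: 16Gi     # sum of all memory limits
    pods: "20"              # max pod count
```

Once a quota is in place, pods without requests and limits are rejected in that namespace, which conveniently enforces fundamental #1.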
5. Start Monitoring Early
Don't wait until production to add monitoring.
Begin with these 4 metrics:
Pod restarts: Are containers crashing?
CPU/Memory usage: Are you hitting limits?
Pod pending time: Can Kubernetes schedule your pods?
Failed deployments: Did your rollout actually work?
Tools to start with:
Kubernetes Dashboard (built-in, easy to set up)
kubectl top nodes and kubectl top pods (built into kubectl, but need Metrics Server)
Metrics Server (lightweight, official component)
Set up a simple Slack alert for pod crashes first. One alert is better than zero.
6. Test Your Deployments Locally First
Use Minikube or kind (Kubernetes in Docker) before touching production.
# Install kind
kind create cluster
# Test your manifests
kubectl apply -f deployment.yaml
# Watch it work (or break)
kubectl get pods --watch
Why:
Break things in safety
Learn kubectl commands without fear
Catch YAML typos before they hit prod
Keep a test cluster running locally. Use it to experiment with new features.
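kind will also read a cluster config file if you pass --config, which is handy for testing multi-node behavior like scheduling and node failures. A small sketch (the filename and node counts are arbitrary):

```yaml
# kind-cluster.yaml
# Create with: kind create cluster --config kind-cluster.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

With two workers you can drain a node (kubectl drain) and watch your replicas: 2 Deployments reschedule, all on your laptop.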
7. Use Small Container Images
Don't use default Ubuntu/CentOS base images.
Bad:
FROM ubuntu:latest  # 80MB+
Good:
FROM alpine:3.18  # 5MB
Why:
Faster pulls = faster deploys
Less storage costs
Fewer security vulnerabilities
Alpine images are often 10x smaller than full-distro base images.
Start with alpine and add only what you need.
8. Learn These kubectl Commands First
Master the basics before advanced features:
# See what's running
kubectl get pods
kubectl get deployments
# Debug problems
kubectl logs <pod-name>
kubectl describe pod <pod-name>
# Check resource usage
kubectl top nodes
kubectl top pods
# Quick fixes
kubectl delete pod <pod-name> # Forces restart
kubectl rollout restart deployment/<name>
# Access running containers
kubectl exec -it <pod-name> -- /bin/sh
Add --watch to any get command to see live updates.
9. Version Control Everything
Every Kubernetes manifest should be in Git.
my-app/
├── deployment.yaml
├── service.yaml
├── configmap.yaml
└── README.md
Why:
Track what changed and when
Easy rollbacks when things break
Share configs with your team
Foundation for CI/CD later
Start with one repo for all your manifests. Organize by app, not by resource type.
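One common way to make that layout deployable as a single unit is a kustomization.yaml next to the manifests, assuming the filenames shown above:

```yaml
# my-app/kustomization.yaml
# Apply the whole app at once with: kubectl apply -k my-app/
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
  - configmap.yaml
```

kubectl supports -k natively, so this needs no extra tooling, and it becomes the natural hook for CI/CD later.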
10. Read Error Messages Carefully
Kubernetes error messages tell you exactly what's wrong.
Common errors and what they mean:
ImagePullBackOff: Wrong image name or no registry access
CrashLoopBackOff: Your app keeps crashing on startup
Pending: Not enough resources to schedule the pod
ErrImagePull: Can't download the container image
Use kubectl describe pod <name> to see the full error message. The answer is usually there.
Start Small. Master the Basics.
You don't need service mesh on day one.
You don't need admission controllers.
You don't need custom operators.
You need:
Working deployments
Basic health checks
Resource limits that prevent cascading failures
Logs and metrics to debug when things break
Master these 10 practices first.
Everything else builds on top.
The engineers running Kubernetes at Airbnb, Netflix, and Spotify all started here.
So should you.
Until next time,
Yoshik K.
Helpful Resources
Kubernetes Failure Stories
https://codeberg.org/hjacobs/kubernetes-failure-stories
How Kubernetes Works
Kubernetes Documentation
