Kubernetes : CKAD weekly challenge
To prepare for the CKAD (Certified Kubernetes Application Developer) exam, it is worth browsing the web to collect advice from people who have taken it before us. That is how I decided to take on the Kubernetes CKAD weekly challenge, a series of 13 practical problems to solve. With less than a week to go before the practical exam, it is exactly the kind of drill that puts you in exam conditions right away.
Rules !
- Be fast, avoid creating yaml manually from scratch
- Use only kubernetes.io/docs for help.
Useful Docker images
These images are handy for quickly spinning up services or debugging Pods. Knowing them helps you solve a challenge fast.
| Name | Purpose | Example |
|---|---|---|
| bash | Lightweight, ships with Bash | `k run --image=bash my-bash -- /bin/bash -c "date && sleep 10s"` |
| busybox | Lightweight, includes wget | `k run -it --rm --image=busybox -- /bin/sh` |
| nginx | Web server on port 80 | `k create deployment nginx-app --image=nginx` |
Useful links
- Kubernetes CKAD weekly challenge by Kim Wuestkamp
- CKAD-exercises by @dgkanatsios
- Enable Network Policy on minikube
- The Mega Kubernetes Learning Resources List
- CKAD resources by @veggiemonk
Task 00 : Configuration
- Go to kubernetes.io/docs > Install tools > Set up kubectl : create aliases with bash completion
echo 'source <(kubectl completion bash)' >>~/.bashrc
echo 'alias k="kubectl"' >>~/.bashrc
echo 'complete -F __start_kubectl k' >>~/.bashrc
echo 'alias kx="kubectl explain"' >>~/.bashrc
echo 'alias kgp="kubectl get pods"' >>~/.bashrc
echo 'alias kdp="kubectl delete pod"' >>~/.bashrc
echo 'alias kgs="kubectl get svc"' >>~/.bashrc
echo 'alias kn="kubectl config set-context --current --namespace "' >>~/.bashrc
- Set current context with specific namespace
k config set-context --current --namespace <namespace>
Challenge 01 : Creating Pods
- Create a Pod and specify a startup command
- Save the config to a file
- Open a shell inside the pod's container
- Add a label to the pod
- Delete the pod immediately
WARNING: `bash -c` is required to start the container with this command!
k run -h
k run --image=bash --restart=Never --dry-run=client -o yaml mypod -- bash -c "hostname > /tmp/hostname && sleep 1d" > 01-pod.yaml
k apply -f 01-pod.yaml
k exec -it mypod -- bash
k exec mypod -- cat /tmp/hostname
k label -f 01-pod.yaml my-label=test
k replace -f 01-pod.yaml --force #if label added by file edit
k get pods --show-labels
k delete pod mypod --grace-period=0 --force
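For reference, the 01-pod.yaml produced by the dry-run above should look roughly like this (field order and defaulted fields may differ):

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: mypod
  name: mypod
spec:
  containers:
  - args:                  # no --command flag, so these become container args
    - bash
    - -c
    - hostname > /tmp/hostname && sleep 1d
    image: bash
    name: mypod
  restartPolicy: Never
```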
Challenge 02 : Namespaces, Deployments, Pod and Services
- Create a namespace
- Create a Deployment with 3 replicas
- Output and edit the config in a file via dry run
- Expose the Deployment using a Service
- Create a pod and execute a command
- Access the Service from a pod in the same namespace
- Access the Service from another namespace via DNS
Doc-Help : concepts > services-networking > service/#service-resource
k create ns 02-namespace-a
k config set-context --current --namespace 02-namespace-a
k create deployment nginx-deployment --image=nginx --dry-run=client -o yaml > 02-deployment.yaml
k apply -f 02-deployment.yaml
k scale deployment nginx-deployment --replicas=3
k set resources deployment nginx-deployment --limits=cpu=200m,memory=512Mi
k expose deployment nginx-deployment --port 4444 --target-port 80
k expose deployment nginx-deployment --port 4444 --target-port 80 --name my-service
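The second `k expose` call above is roughly equivalent to this manifest (a sketch: the selector relies on the default `app=nginx-deployment` label that `k create deployment` sets):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: nginx-deployment   # default label added by `k create deployment`
  ports:
  - port: 4444              # service port
    targetPort: 80          # container port
```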
k run --image=cosmintitei/bash-curl --restart=Never pod1 -- bash -c "curl http://nginx-deployment:4444"
k run --image=cosmintitei/bash-curl --restart=Never --command=true pod2 -- sleep 1d
k exec -it pod2 -- bash
k exec pod2 -- bash -c "curl http://nginx-deployment:4444"
k exec pod2 -- bash -c 'curl http://$NGINX_DEPLOYMENT_SERVICE_HOST:$NGINX_DEPLOYMENT_SERVICE_PORT'
k create ns 02-namespace-b
k run --image=cosmintitei/bash-curl --restart=Never --namespace 02-namespace-b pod3 -- sleep 1d
k exec pod3 --namespace 02-namespace-b -- bash -c "curl http://my-service.02-namespace-a:4444"
k exec pod3 --namespace 02-namespace-b -- bash -c "curl http://my-service.02-namespace-a.svc.cluster.local:4444"
Challenge 03 : CronJobs and Volumes
- Create a node hostPath PersistentVolume
- Create a PVC resolved by the previous PV
- Create a CronJob with successfulJobsHistoryLimit and parallelism
- Check the volume file updated by the pod
- Check the number of successful jobs kept in history
Doc-Help : tasks > configure-pod-container > configure-persistent-volume-storage
k create ns k8s-challenge-3
k config set-context --current --namespace k8s-challenge-3
# Use the doc tasks snippet to create the PV + PVC config files (use storageClassName="" or storageClassName=manual)
k apply -f pv-manual.yaml
k apply -f pv-claim.yaml
k explain PersistentVolume.spec
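A minimal hostPath PV + PVC pair adapted from the docs task (the hostPath path, capacity, and request sizes are example values):

```yaml
# pv-manual.yaml: node-local volume backed by a hostPath directory
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-manual
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /tmp/data
---
# pv-claim.yaml: claim resolved by the PV above (matching storageClassName)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
```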
k create cj cronjob1 --image=bash --schedule="*/1 * * * *" -o yaml --dry-run=client -- bash -c "hostname >> /tmp/vol/storage" > cronjob1.yaml
# Edit cronjob1.yaml and add volume + volumeMount / set spec.successfulJobsHistoryLimit=4 / set spec.jobTemplate.spec.parallelism=2
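After those edits, cronjob1.yaml should look roughly like this (the volume and claim names are assumptions; older clusters may use batch/v1beta1):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cronjob1
spec:
  schedule: "*/1 * * * *"
  successfulJobsHistoryLimit: 4   # keep the last 4 successful jobs
  jobTemplate:
    spec:
      parallelism: 2              # run 2 pods per job
      template:
        spec:
          containers:
          - name: cronjob1
            image: bash
            args:
            - bash
            - -c
            - hostname >> /tmp/vol/storage
            volumeMounts:
            - name: vol
              mountPath: /tmp/vol
          volumes:
          - name: vol
            persistentVolumeClaim:
              claimName: pv-claim
          restartPolicy: OnFailure
```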
k get jobs,pods
Challenge 04 : Deployment, Rollouts and Rollbacks
- Create deployment
- Scale the Deployment
- Set the Deployment image to a new value
- Verify the Deployment image changes
- Roll back the Deployment
Doc-Help : concepts > workloads > controllers > deployment
k create ns one
k config set-context --current --namespace one
k explain deployment.spec.template.spec.containers
k create deployment nginx --image=nginx:1.14.2 --dry-run=client -o yaml > deployment.yaml
k apply -f deployment.yaml
k scale deploy nginx --replicas=15
k patch deployments.apps nginx -p '{"spec": {"template": {"spec": {"containers": [{"name": "nginx", "image": "nginx:1.15.10"}]}}}}'
k patch deployments.apps nginx -p '{"spec": {"template": {"spec": {"containers": [{"name": "nginx", "image": "nginx:1.15.666"}]}}}}'
k set image deploy/nginx nginx=nginx:1.15.10
k get pods -o yaml | grep 1.15.10 | wc -l
k rollout -h
k rollout history deployment/nginx
k rollout undo deployment/nginx
Challenge 05 : Secrets and ConfigMaps
- Create a Secret from literals
- Create a ConfigMap from files
- Use the Secret in a pod as a volume
Doc-Help : tasks > configure-pod-container > configure-pod-configmap
k create -h
k create secret generic -h
k create secret generic secret1 --from-literal=password=12345678 --dry-run=client -o yaml > secret1.yaml
k apply -f secret1.yaml
# Copy/paste pod1.yaml from the k8s docs/tasks: inject data into a pod via a Secret (update the secret volume)
k apply -f pod1.yaml
k exec pod1 -- bash -c "cat /tmp/secret1/password && echo"
mkdir drinks; echo ipa > drinks/beer; echo red > drinks/wine; echo sparkling > drinks/water
k create configmap configmap1 --from-file=./drinks
# Edit pod1.yaml and add envFrom -> configMapRef -> configmap1 (cf docs/tasks : configure pod)
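The envFrom edit in pod1.yaml looks roughly like this (container name and image depend on the docs example you copied):

```yaml
spec:
  containers:
  - name: pod1        # container name as in your pod1.yaml
    image: nginx
    envFrom:
    - configMapRef:
        name: configmap1
```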
k replace -f pod1.yaml --force --grace-period=0
k exec pod1 -- env
Challenge 06 : NetworkPolicy
Requirement : Network Policy feature enabled.
Doc-Help : tasks > administer-cluster > declare-network-policy
Ex: Authorize ingress traffic to pods labelled app=secured only from pods labelled access=true
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: access-nginx
spec:
  podSelector:
    matchLabels:
      app: secured
  ingress:
  - from:
    - podSelector:
        matchLabels:
          access: "true"
Test the policy
# Create the service to which the policy will apply
k run api --image=nginx --labels app=secured
k expose pod api --port 80 --name api-svc
k run --image=busybox --restart=Never testpod -- sleep 1d
k exec testpod -- wget --spider api-svc #BLOCKED
k run --image=busybox --restart=Never --labels="access=true" testpod2 -- sleep 1d
k exec testpod2 -- wget --spider api-svc #PASSED
Challenge 07 : Service Migration
- Create a Deployment
- Create an ExternalName Service
- Test access to the external service
Doc-Help : concepts > services-networking > service#externalname
k create deployment -h
k create svc -h
k create svc externalname -h
k create svc externalname my-svc --external-name www.google.com
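The command above generates a Service equivalent to:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-svc
spec:
  type: ExternalName
  externalName: www.google.com   # cluster DNS returns a CNAME to this host
```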
k run --image=byrnedo/alpine-curl --command pod1 -- sleep 1d
k exec pod1 -- sh -c "ping -c 4 my-svc"
# Test using curl (Host header needed so the external server answers)
k exec pod1 -- sh -c "curl -h"
k exec pod1 -- sh -c "curl --header 'Host: www.google.com' my-svc"
# Test using busybox/wget
k run --restart=Never --image=busybox --rm pod2 -- sh -c "wget --spider --header 'Host: www.google.com' my-svc"
Challenge 08 : User Authorization RBAC
- Create a ClusterRole and ClusterRoleBinding
- Limit authorizations to specific verbs / objects
Doc-Help : reference > access-authn-authz > rbac#default-roles-and-role-bindings
k create clusterrole -h
k create clusterrole secretmanager --verb=* --resource=secret
# Bind clusterrole to "secret@test.com" user
k create clusterrolebinding secretmanager-rb --clusterrole=secretmanager --user=secret@test.com
# Test
kubectl auth can-i create secret --as secret@test.com
k auth can-i '*' secrets --as secret@test.com
# Auth for specific pod
k create clusterrole podmgr --verb=* --resource=pods --resource-name=compute
k create clusterrolebinding podmgr-rb --clusterrole=podmgr --user=deploy@test.com
k auth can-i '*' secrets --as deploy@test.com #no
k auth can-i create pod/compute --as deploy@test.com #yes
# Authorize read permission to secrets
k create clusterrole secret-reader --verb=get,list,watch --resource=secrets --resource-name=compute-secret
k create clusterrolebinding secret-reader-rb --clusterrole=secret-reader --user=deploy@test.com
k auth can-i get secret/compute-secret --as deploy@test.com #yes
k auth can-i delete secrets/compute-secret --as deploy@test.com # no
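For reference, the imperative secret-reader commands correspond roughly to these manifests:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: secret-reader
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["compute-secret"]   # restrict to this named secret
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: secret-reader-rb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: secret-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: deploy@test.com
```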
Challenge 09 : Logging Sidecar
- Export the Deployment to YAML
- Edit the YAML to add a sidecar container
- Test displaying logs from the sidecar container
- Bonus: get the pod name from its label
Init : kubectl create -f https://raw.githubusercontent.com/wuestkamp/k8s-challenges/master/9/scenario.yaml
Doc-Help : docs > reference > kubectl > jsonpath
# Check logs from the existing pod volume
k exec nginx-54d8ff86dc-tthzg -- tail -f /var/log/nginx/access.log
# Export the deployment to edit it
k get deploy nginx -o yaml > deployment.yaml
# Sidecar container snippet to add
# - image: bash
#   name: logging
#   args: ["bash", "-c", "tail -f /var/log/nginx/access.log"]
#   volumeMounts:
#   - name: logs
#     mountPath: /var/log/nginx
k apply -f deployment.yaml
# Test (after accessing the nginx page, e.g. curl $(minikube -p ckad ip):1234)
k logs nginx-788965584-gnftv -c logging
# Bonus
POD=$(k get pod -l run=nginx -o jsonpath='{.items[0].metadata.name}')
k logs $POD -c logging
Challenge 10 : Deployment Hacking
- Create a deployment of image nginx
- Scale deployment to 3 replicas
- Check deployment events
- Try to add manually pods to deployment set
k create deployment super-app --image=nginx
k scale deployment super-app --replicas=3
kubectl get events | tail -n 10
# Show pod labels (replicaset template-hash is added to pod)
k get pod --show-labels # app=super-app,pod-template-hash=747cdb6c98
# Try create pod with these labels : #FAILURE : POD DELETED BY REPLICA CONTROLLER
k run nginx --image=nginx --labels="app=super-app,pod-template-hash=747cdb6c98"
Challenge 11 : Security Contexts
- Add Security Context to pod
- Add Security Context to pod container
- Add Security Context to run container as root
kx pod.spec.securityContext
# Edit Pod.yaml to add Pod global securityContext
k apply -f pod.yaml
k exec bash -c bash1 -- touch /tmp/share/file
k exec bash -c bash2 -- ls -lh /tmp/share/file
k exec bash -c bash1 -- whoami # ftp
k exec bash -c bash2 -- whoami # ftp
# Edit Pod.yaml to add Container bash1 securityContext => root
k apply -f pod-securityContext-2.yaml
k exec bash -c bash1 -- whoami # root
k exec bash -c bash2 -- whoami # ftp
k exec bash -c bash1 -- touch /tmp/share/file
k exec bash -c bash2 -- ls -lh /tmp/share/file
k exec bash -c bash2 -- rm /tmp/share/file # FILE DELETED !!!
# Check parent dir permissions
k exec bash -c bash2 -- ls -la /tmp/share # => drwxrwxrwx 2 root root
# Edit pod.yaml to set the /tmp/share dir permissions
k exec bash -c bash1 -- chmod og-w -R /tmp/share
k exec bash -c bash1 -- touch /tmp/share/file
k exec bash -c bash2 -- ls -lh /tmp/share/file
k exec bash -c bash2 -- rm /tmp/share/file # PERMISSION DENIED :)
# Check parent dir permissions
k exec bash -c bash2 -- ls -la /tmp/share # => drwxr-xr-x 2 root root
Add a Pod-wide security context (pod.spec.securityContext.runAsUser: 21):
# kubernetes sample configuration of pod wide SecurityContext to run containers as specific user
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: bash
  name: bash
spec:
  # Start Pod securityContext
  securityContext:
    runAsUser: 21
  # End Pod securityContext
  volumes:
  - name: share
    emptyDir: {}
  containers:
  - command:
    - /bin/sh
    - -c
    - sleep 1d
    image: bash
    name: bash1
    volumeMounts:
    - name: share
      mountPath: /tmp/share
  - command:
    - /bin/sh
    - -c
    - sleep 1d
    image: bash
    name: bash2
    volumeMounts:
    - name: share
      mountPath: /tmp/share
  restartPolicy: Never
Bonus : Set permissions for the shared dir via an initContainer
Edit the previous pod.yaml to add:
#...
spec:
  initContainers:
  - name: permission
    image: bash
    args: ["sh", "-c", "chmod og-w -R /tmp/share"]
    volumeMounts:
    - name: share
      mountPath: /tmp/share
    securityContext:
      runAsUser: 0
#...
k delete -f pod.yaml
k apply -f pod-securityContext-3.yaml
k exec bash -c bash1 -- touch /tmp/share/file
k exec bash -c bash2 -- ls -lh /tmp/share/file
k exec bash -c bash2 -- rm /tmp/share/file # PERMISSION DENIED :)
Challenge 12 : Various Environment Variables
- Create a Secret from a file
- Create a pod that consumes the Secret as env vars
cat <<EOF > env.txt
CREDENTIAL_001=-bQ(ETLPGE[uT?6C;ed
CREDENTIAL_002=C_;SU@ev7yg.8m6hNqS
CREDENTIAL_003=ZA#$$-Ml6et&4?pKdvy
CREDENTIAL_004=QlIc3$5*+SKsw==9=p{
CREDENTIAL_005=C_2\a{]XD}1#9BpE[k?
CREDENTIAL_006=9*KD8_w<);ozb:ns;JC
CREDENTIAL_007=C[V$Eb5yQ)c~!..{LRT
SETTING_USE_SEC=true
SETTING_ALLOW_ANON=true
SETTING_PREVENT_ADMIN_LOGIN=true
EOF
k create secret -h
k create secret generic app-secret --from-env-file env.txt
k describe secrets app-secret
# Create pod
k run nginx --image=nginx --restart=Never --dry-run=client -o yaml > pod.yaml
k explain pod.spec.containers.envFrom
# Edit the nginx pod to add an envFrom secret entry
# ...
#spec:
#  containers:
#  - name: nginx
#    envFrom:
#    - secretRef:
#        name: app-secret
# ...
k apply -f pod.yaml
# Test pod env
k exec nginx -- env
k exec nginx -- sh -c 'echo $CREDENTIAL_002'
Challenge 13 : ReplicaSet without Downtime
- Add a label to an existing Pod
- Create a ReplicaSet that adopts the existing Pod
Doc-Help : concepts > workloads > controllers > replicaset
# Create pod
cat <<EOF | k apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-calc
spec:
  containers:
  - command:
    - sh
    - -c
    - echo "important calculation"; sleep 1d
    image: nginx
    name: pod-calc
EOF
# Label the pod
k label pod pod-calc app=calc
# 1. Create a ReplicaSet of 2 replicas with the same template spec as pod-calc (with label app=calc).
# 2. Make sure the ReplicaSet selector.matchLabels is set to app=calc, so it adopts the existing Pod as one of its replicas.
cat <<EOF | k apply -f -
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: calculation-rs
spec:
  replicas: 2
  selector:
    matchLabels:
      app: calc
  template:
    metadata:
      labels:
        app: calc
    spec:
      containers:
      - command:
        - sh
        - -c
        - echo "important calculation"; sleep 1d
        image: nginx
        name: pod-calc
EOF
# Check
k get rs
Challenge 14 : LivenessProbe
- Create a my-app deployment of nginx in the zeus namespace
- Add a livenessProbe on HTTP port 80 with a 10s initial delay and a 15s period
k create ns zeus
k config set-context --current --namespace zeus
k create deployment my-app --image=nginx --dry-run=client -o yaml > my-app-deployment.yaml
k explain pod.spec.containers
k explain pod.spec.containers.livenessProbe
Edit my-app-deployment.yaml and add the following livenessProbe snippet at .spec.template.spec.containers[0]
livenessProbe:
  initialDelaySeconds: 10
  periodSeconds: 15
  httpGet:
    path: /
    port: 80
    scheme: HTTP
Apply configuration and test
k apply -f my-app-deployment.yaml
# Get the IP of one pod of the my-app deployment, e.g. 172.17.0.2
k get pod -o wide
k run tmp --rm -ti --restart=Never --image=busybox -- sh -c "wget -O- 172.17.0.2"
Challenge 15 : Horizontal Pod Autoscaler
- Create a 5-replica Deployment of nginx
- Autoscale this Deployment using YAML
k create deployment nginx --image=nginx
k scale deployment nginx --replicas=5
k autoscale -h
k autoscale deployment nginx --min=5 --max=10 --cpu-percent=80 --dry-run=client -o yaml > autoscale.yaml
k apply -f autoscale.yaml
Content of autoscale.yaml :
# Example of a Kubernetes HorizontalPodAutoscaler resource for an existing Deployment
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  creationTimestamp: null
  name: nginx
spec:
  maxReplicas: 10
  minReplicas: 5
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  targetCPUUtilizationPercentage: 80