Killer Shell CKAD: Interactive Scenarios for Kubernetes Application Developers
These scenarios can be used standalone for CKAD preparation or for learning Kubernetes in general.
This playground will always run the same Kubernetes version as the one currently used in the Linux Foundation exam.
kubectl version
or
kubectl version --short
kubectl cluster-info --dump
How to setup Vim for the K8s exams
Persist Vim settings in .vimrc
We look at important Vim settings that are useful when working with YAML during the K8s exams.
Settings
First create, or open if it already exists, the file ~/.vimrc:
vim ~/.vimrc
Now enter (in insert-mode activated with i) the following lines:
set expandtab
set tabstop=2
set shiftwidth=2
Save and close the file by pressing Esc, followed by :x and Enter.
Explanation
Whenever you open Vim now as the current user, these settings will be used.
If you ssh onto a different server, these settings will not be transferred.
Settings explained:
expandtab: use spaces instead of tab characters
tabstop: number of spaces a tab counts for
shiftwidth: number of spaces used for indentation
How to connect to and work on other hosts
Create file on a different host
During the exam you might need to connect to other hosts using ssh.
Create a new empty file /root/node01 on host node01.
Tip
ssh node01
Solution
ssh node01
touch /root/node01
exit
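Alternatively the command can be run over ssh directly, a short sketch:
ssh node01 "touch /root/node01"
ssh node01 "ls -l /root/node01"   # confirm the file exists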
Work with different kubectl contexts
View all and current context
During the exam you'll be provided with a command you need to run before every question to switch into the correct kubectl context.
A kubectl context contains connection information to a Kubernetes cluster. Different kubectl contexts can connect to different Kubernetes clusters, or to the same cluster but using different users or different default namespaces.
List all available kubectl contexts and write the output to /root/contexts.
Tip
k config -h
Solution
k config get-contexts
k config get-contexts > /root/contexts
We see three contexts, all pointing to the same cluster.
Context kubernetes-admin@kubernetes will connect to the default Namespace.
Context purple will connect to the purple Namespace.
Context yellow will connect to the yellow Namespace.
Switch context purple
Switch to context purple and list all Pods.
Solution
k config use-context purple
k get pod
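To double-check which context and default Namespace are now active (standard kubectl commands):
k config current-context
k config view --minify | grep namespace: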
Create a Pod with Resource Requests and Limits
Create a new Namespace limit.
In that Namespace create a Pod named resource-checker of image httpd:alpine.
The container should be named my-container.
It should request 30m CPU and be limited to 300m CPU.
It should request 30Mi memory and be limited to 30Mi memory.
Solution
First we create the Namespace:
k create ns limit
Then we generate a Pod yaml:
k -n limit run resource-checker --image=httpd:alpine -oyaml --dry-run=client > pod.yaml
Next we adjust it to the requirements:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: resource-checker
  name: resource-checker
  namespace: limit
spec:
  containers:
  - image: httpd:alpine
    name: my-container
    resources:
      requests:
        memory: "30Mi"
        cpu: "30m"
      limits:
        memory: "30Mi"
        cpu: "300m"
  dnsPolicy: ClusterFirst
  restartPolicy: Always
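Then apply the file and check that the values landed on the container, a quick sketch:
k apply -f pod.yaml
k -n limit describe pod resource-checker | grep -A2 -E "Limits|Requests"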
Create ConfigMaps
- Create a ConfigMap named trauerweide with content tree=trauerweide
- Create the ConfigMap stored in existing file /root/cm.yaml
Tip
# create a new ConfigMap
kubectl create cm trauerweide -h
# create a ConfigMap from file
kubectl create -f ...
Solution
kubectl create cm trauerweide --from-literal tree=trauerweide
kubectl -f /root/cm.yaml create
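A quick verification sketch using standard kubectl commands:
k get cm trauerweide -oyaml   # should contain tree: trauerweide
k get cm                      # the ConfigMap from /root/cm.yaml should be listed as well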
Access ConfigMaps in Pod
- Create a Pod named pod1 of image nginx:alpine
- Make key tree of ConfigMap trauerweide available as environment variable TREE1
- Mount all keys of ConfigMap birke as volume. The files should be available under /etc/birke/*
- Test env+volume access in the running Pod
Solution
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  volumes:
  - name: birke
    configMap:
      name: birke
  containers:
  - image: nginx:alpine
    name: pod1
    volumeMounts:
    - name: birke
      mountPath: /etc/birke
    env:
    - name: TREE1
      valueFrom:
        configMapKeyRef:
          name: trauerweide
          key: tree
Verify
kubectl exec pod1 -- env | grep "TREE1=trauerweide"
kubectl exec pod1 -- cat /etc/birke/tree
kubectl exec pod1 -- cat /etc/birke/level
kubectl exec pod1 -- cat /etc/birke/department
Create a Deployment with a ReadinessProbe
A ReadinessProbe will be executed periodically all the time, not just during start or until a Pod is ready.
Create a Deployment named space-alien-welcome-message-generator of image httpd:alpine with one replica.
It should have a ReadinessProbe which executes the command stat /tmp/ready. This means once the file exists the Pod should be ready.
The initialDelaySeconds should be 10 and periodSeconds should be 5.
Create the Deployment and observe that the Pod won’t get ready.
Probes
ReadinessProbes and LivenessProbes will be executed periodically all the time.
If a StartupProbe is defined, ReadinessProbes and LivenessProbes won’t be executed until the StartupProbe succeeds.
ReadinessProbe fails*: Pod won’t be marked Ready and won’t receive any traffic
LivenessProbe fails*: The container inside the Pod will be restarted
StartupProbe fails*: The container inside the Pod will be restarted
*fails: fails more times than configured with failureThreshold
Tip
Here is an example ReadinessProbe snippet:
readinessProbe:
  exec:
    command:
    - ls
    - /tmp
  initialDelaySeconds: 5
  periodSeconds: 5
Solution
First we generate a Deployment yaml:
k create deploy space-alien-welcome-message-generator --image=httpd:alpine -oyaml --dry-run=client > deploy.yaml
Then we adjust it to the requirements:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: space-alien-welcome-message-generator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: space-alien-welcome-message-generator
  strategy: {}
  template:
    metadata:
      labels:
        app: space-alien-welcome-message-generator
    spec:
      containers:
      - image: httpd:alpine
        name: httpd
        resources: {}
        readinessProbe:
          exec:
            command:
            - stat
            - /tmp/ready
          initialDelaySeconds: 10
          periodSeconds: 5
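Then apply it, assuming the file is still named deploy.yaml as generated above:
k apply -f deploy.yaml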
Observe
We see 0/1 in the READY column.
controlplane $ k get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
space-alien-welcome-message-generator 0/1 1 0 40s
We can also run k describe deploy to see events about failed ReadinessProbes.
Make the Deployment Ready
Make the Deployment ready.
Exec into the Pod and create the file /tmp/ready. Observe that the Pod is ready.
Solution
k get pod # use pod name
k exec space-alien-welcome-message-generator-5c945bc5f9-m9nkb -- touch /tmp/ready
Observe
We see 1/1 in the READY column.
controlplane $ k get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
space-alien-welcome-message-generator 1/1 1 0 3m53s
We can also run k describe deploy to see events about the ReadinessProbes.
Build a container from scratch and run it
Create a new file /root/Dockerfile to build a container image from. It should use image bash and run the command ping killercoda.com.
Build the image and tag it as pinger.
Run the image (create a container) named my-ping.
You can use docker or podman for this scenario. Just stick to your choice throughout all steps.
Info
Dockerfile: List of commands from which an Image can be built
Image: Binary file which includes all data/requirements to be run as a Container
Container: Running instance of an Image
Registry: Place where we can push/pull Images to/from
Solution
For most situations you can just use the commands docker or podman interchangeably.
Create the /root/Dockerfile:
FROM bash
CMD ["ping", "killercoda.com"]
Build the image:
podman build -t pinger .
podman image ls
Run the image:
podman run --name my-ping pinger
NOTE: the --name flag has to come before the image name:
⭕️ podman run --name my-ping pinger
❌ podman run pinger --name my-ping (here --name my-ping would be passed as arguments to the container)
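A quick way to verify, using standard podman commands (ping runs in the foreground, so check from a second terminal or after stopping it):
podman ps -a                 # container my-ping should be listed
podman logs my-ping | head   # shows the ping output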
Perform a rolling rollout of an application
Application "wonderful" is running in the default Namespace.
You can call the app using curl wonderful:30080.
The app has a Deployment with image httpd:alpine, but should be switched over to nginx:alpine.
Set the maxSurge to 50% and the maxUnavailable to 0%. Then perform a rolling update.
Wait till the rolling update has succeeded.
Explanation
Why can you call curl wonderful:30080 and it works?
There is a NodePort Service wonderful which listens on port 30080. It has the Pods of the Deployment of app "wonderful" as selector.
We can reach the NodePort Service via the K8s Node IP:
curl 172.30.1.2:30080
And because of an entry in /etc/hosts we can call:
curl wonderful:30080
Tip
Changing the image of a running Deployment will start a rollout.
A rolling update (RollingUpdate) is the default strategy.
Solution
k edit deploy wonderful-v1
Then we adjust it to the requirements:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: wonderful
  name: wonderful-v1
  namespace: default
  ...
spec:
  ...
  replicas: 3
  selector:
    matchLabels:
      app: wonderful
  strategy:
    rollingUpdate:
      maxSurge: 50%
      maxUnavailable: 0%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: wonderful
    spec:
      containers:
      - image: nginx:alpine
        ...
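After saving, we can wait for the rollout to finish and check the new image, using standard kubectl commands:
k rollout status deploy wonderful-v1
k get deploy wonderful-v1 -owide   # shows the image in use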
Perform a Green-Blue rollout of an application
Application "wonderful" is running in the default Namespace.
You can call the app using curl wonderful:30080.
The application YAML is available at /wonderful/init.yaml.
The app has a Deployment with image httpd:alpine, but should be switched over to nginx:alpine.
The switch should happen instantly. Meaning that from the moment of rollout, all new requests should hit the new image.
Explanation
Why can you call curl wonderful:30080 and it works?
There is a NodePort Service wonderful which listens on port 30080. It has the Pods of the Deployment of app "wonderful" as selector.
We can reach the NodePort Service via the K8s Node IP:
curl 172.30.1.2:30080
And because of an entry in /etc/hosts we can call:
curl wonderful:30080
Tip
The idea is to have two Deployments running at the same time, one with the old and one with the new image.
But there is only one Service, and it only points to the Pods of one Deployment.
Once we point the Service to the Pods of the new Deployment, all new requests will hit the new image.
Solution
Create a new Deployment and pay attention to these parts:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: wonderful
  name: wonderful-v2
spec:
  replicas: 4
  selector:
    matchLabels:
      app: wonderful
      version: v2
  template:
    metadata:
      labels:
        app: wonderful
        version: v2
    spec:
      containers:
      - image: nginx:alpine
        name: nginx
Wait till all Pods are running, then switch the Service selector:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: wonderful
  name: wonderful
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30080
  selector:
    app: wonderful
    version: v2
  type: NodePort
Confirm with curl wonderful:30080, all requests should hit the new image.
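One way to sample this, assuming the images serve their default pages ("It works!" for httpd, "Welcome to nginx!" for nginx):
for i in $(seq 1 5); do curl -s wonderful:30080 | grep -o 'Welcome to nginx!\|It works!'; done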
Finally scale down the original Deployment:
kubectl scale deploy wonderful-v1 --replicas 0
Perform a Canary rollout of an application
Application "wonderful" is running in the default Namespace.
You can call the app using curl wonderful:30080.
The application YAML is available at /wonderful/init.yaml.
The app has a Deployment with image httpd:alpine, but should be switched over to nginx:alpine.
The switch should not happen fast or automatically, but using the Canary approach:
For this create a new Deployment wonderful-v2 which uses image nginx:alpine.
The total amount of Pods of both Deployments combined should be 10.
Explanation
Why can you call curl wonderful:30080 and it works?
There is a NodePort Service wonderful which listens on port 30080. It has the Pods of the Deployment of app "wonderful" as selector.
We can reach the NodePort Service via the K8s Node IP:
curl 172.30.1.2:30080
And because of an entry in /etc/hosts we can call:
curl wonderful:30080
Tip 1
The Service wonderful points to Pods with label app: wonderful.
Tip 2
The replica counts of the two Deployments define how often each version's Pods will serve a request.
Solution
Reduce the replicas of the old deployment to 8:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: wonderful
  name: wonderful-v1
spec:
  replicas: 8
  selector:
    matchLabels:
      app: wonderful
  template:
    metadata:
      labels:
        app: wonderful
    spec:
      containers:
      - image: httpd:alpine
        name: httpd
Create a new Deployment and pay attention to these parts:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: wonderful
  name: wonderful-v2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: wonderful
  template:
    metadata:
      labels:
        app: wonderful
    spec:
      containers:
      - image: nginx:alpine
        name: nginx
Verify
Call the app with curl wonderful:30080, around 20% of requests should hit the new image.
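A rough way to measure this, assuming the images serve their default pages ("It works!" for httpd, "Welcome to nginx!" for nginx):
for i in $(seq 1 10); do curl -s wonderful:30080 | grep -o 'Welcome to nginx!\|It works!'; done | sort | uniq -c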
Get a list of all CRDs and custom objects
Write the list of all installed CRDs into /root/crds.
Write the list of all DbBackup objects into /root/db-backups.
Tip 1
k get crd
Tip 2
k get db-backups
Tip 3
k get db-backups -A
Solution
Resources from CRDs can be namespaced, so here we have to look for objects in all Namespaces.
k get crd > /root/crds
k get db-backups -A > /root/db-backups
Install a new CRD and create an object
The team worked really hard for months on a new Shopping-Items CRD which is currently in beta.
Install it from /code/crd.yaml.
Then create a ShoppingItem object named bananas in Namespace default. The dueDate should be tomorrow and the description should be buy yellow ones.
Tip 1
You install the CRD YAML just like any other resource.
Tip 2
You can use this template:
apiVersion: "beta.killercoda.com/v1"
kind: # TODO fill out
metadata:
  name: bananas
spec:
  # TODO fill out
Solution
k -f /code/crd.yaml apply
Now you’ll be able to see the new CRD:
k get crd | grep shopping
And we can create a new object of it:
apiVersion: "beta.killercoda.com/v1"
kind: ShoppingItem
metadata:
  name: bananas
spec:
  description: buy yellow ones
  dueDate: tomorrow
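Apply the object, a sketch assuming the YAML above was saved as bananas.yaml:
k apply -f bananas.yaml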
k get shopping-item
k get shopping-item bananas -oyaml
Find all Helm releases in the cluster
Write the list of all Helm releases in the cluster into /root/releases.
Info
Helm Chart: Kubernetes YAML template-files combined into a single package, Values allow customisation
Helm Release: Installed instance of a Chart
Helm Values: Allow to customise the YAML template-files in a Chart when creating a Release
Tip 1
helm ls
Tip 2
helm ls -A
Solution
Helm releases can be installed into any Namespace, so here we have to look in all of them.
helm ls -A > /root/releases
Delete a Helm release
Delete the Helm release apiserver.
Tip
helm uninstall -h
Solution
helm ls -A
helm -n team-yellow uninstall apiserver
Install a Helm chart
Install the Helm chart nginx-stable/nginx-ingress into Namespace team-yellow.
The release should have the name devserver.
Tip
helm install -h
Solution
helm -n team-yellow install devserver nginx-stable/nginx-ingress
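A quick check that the release was installed (standard helm command):
helm -n team-yellow ls   # devserver should be listed as deployed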
Create Services for existing Deployments
There are two existing Deployments in Namespace world which should be made accessible via an Ingress.
First, create ClusterIP Services for both Deployments for port 80. The Services should have the same name as the Deployments.
Tip
k expose deploy -h
Solution
k -n world expose deploy europe --port 80
k -n world expose deploy asia --port 80
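To verify, both Services should exist and have endpoints from the Deployments' Pods:
k -n world get svc,ep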
Create Ingress for existing Services
The Nginx Ingress Controller has been installed.
Create a new Ingress resource called world for domain name world.universe.mine. The domain points to the K8s Node IP via /etc/hosts.
The Ingress resource should have two routes pointing to the existing Services:
http://world.universe.mine:30080/europe/
and
http://world.universe.mine:30080/asia/
Explanation
Check the NodePort Service for the Nginx Ingress Controller to see the ports:
k -n ingress-nginx get svc ingress-nginx-controller
We can reach the NodePort Service via the K8s Node IP:
curl http://172.30.1.2:30080
And because of the entry in /etc/hosts we can call:
curl http://world.universe.mine:30080
Tip 1
The Ingress resource needs to be created in the same Namespace as the applications.
Tip 2
Find out the ingressClassName with:
k get ingressclass
Tip 3
You can work with this template
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: world
  namespace: world
  annotations:
    # this annotation removes the need for a trailing slash when calling urls
    # but it is not necessary for solving this scenario
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx # k get ingressclass
  rules:
  - host: "world.universe.mine"
    ...
Solution
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: world
  namespace: world
  annotations:
    # this annotation removes the need for a trailing slash when calling urls
    # but it is not necessary for solving this scenario
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx # k get ingressclass
  rules:
  - host: "world.universe.mine"
    http:
      paths:
      - path: /europe
        pathType: Prefix
        backend:
          service:
            name: europe
            port:
              number: 80
      - path: /asia
        pathType: Prefix
        backend:
          service:
            name: asia
            port:
              number: 80
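Once the Ingress is applied, both routes from the task should answer:
curl http://world.universe.mine:30080/europe/
curl http://world.universe.mine:30080/asia/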
Create new NPs
There are existing Pods in Namespaces space1 and space2.
We need a new NetworkPolicy named np that restricts all Pods in Namespace space1 to only have outgoing traffic to Pods in Namespace space2. Incoming traffic is not affected.
The NetworkPolicy should still allow outgoing DNS traffic on port 53 TCP and UDP.
Template
You can use this template:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np
  namespace: space1
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - ports:
    - port: 53
      protocol: TCP
    - port: 53
      protocol: UDP
  # TODO add the namespaceSelector section here
Tip
For learning you can check the NetworkPolicy Editor
The namespaceSelector in NetworkPolicies works with Namespace labels, so first we check the existing labels of the Namespaces:
k get ns --show-labels
Solution
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np
  namespace: space1
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - ports:
    - port: 53
      protocol: TCP
    - port: 53
      protocol: UDP
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: space2
Verify
# these should work
k -n space1 exec app1-0 -- curl -m 1 microservice1.space2.svc.cluster.local
k -n space1 exec app1-0 -- curl -m 1 microservice2.space2.svc.cluster.local
k -n space1 exec app1-0 -- nslookup tester.default.svc.cluster.local
k -n space1 exec app1-0 -- nslookup killercoda.com
# these should not work
k -n space1 exec app1-0 -- curl -m 1 tester.default.svc.cluster.local
k -n space1 exec app1-0 -- curl -m 1 killercoda.com
List all Admission Controller Plugins
Write all Admission Controller Plugins which are enabled in the kube-apiserver manifest into /root/admission-plugins.
Tip
We can check the kube-apiserver manifest:
cat /etc/kubernetes/manifests/kube-apiserver.yaml
Or the running process:
ps aux | grep kube-apiserver
Solution
We filter for the argument:
cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep admission-plugins
And then create a new file at the requested location /root/admission-plugins with content:
NodeRestriction
LimitRanger
Priority
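If you prefer writing the file in one step, here is a possible one-liner (a sketch, assuming the plugins are set via --enable-admission-plugins in the manifest):
grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml | cut -d= -f2 | tr ',' '\n' > /root/admission-plugins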
Enable Admission Controller Plugin
Enable the Admission Controller Plugin MutatingAdmissionWebhook.
It can take a few minutes for the apiserver container to restart after changing the manifest. You can watch using watch crictl ps.
Tip 1
The apiserver manifest is under /etc/kubernetes/manifests/kube-apiserver.yaml.
Always make a backup! But into a different directory than /etc/kubernetes/manifests.
Tip 2
The argument we're looking for is --enable-admission-plugins.
Solution
We need to edit the apiserver manifest:
# ALWAYS make a backup
cp /etc/kubernetes/manifests/kube-apiserver.yaml ~/kube-apiserver.yaml
vim /etc/kubernetes/manifests/kube-apiserver.yaml
And add the new plugin to the list:
--enable-admission-plugins=NodeRestriction,LimitRanger,Priority,MutatingAdmissionWebhook
Now we wait till the apiserver has restarted, using watch crictl ps.
For speed, we could also move the kube-apiserver.yaml manifest out of the directory, wait till the process has ended and then move it back in.
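Once the apiserver is back, we can confirm the flag is active in the running process:
ps aux | grep enable-admission-plugins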
Disable Admission Controller Plugin
Delete Namespace space1.
Delete Namespace default (throws error).
Disable the Admission Controller Plugin NamespaceLifecycle. It's not recommended to do this at all, we just do it here to show the effect.
It can take a few minutes for the apiserver container to restart after changing the manifest. You can watch using watch crictl ps.
Now delete Namespace default.
Tip
The apiserver manifest also has the argument --disable-admission-plugins.
Solution
First we delete Namespace space1:
k delete ns space1
And the same for default. But it won't work because an admission plugin prevents the deletion!
k delete ns default # error
We need to edit the apiserver manifest:
# ALWAYS make a backup
cp /etc/kubernetes/manifests/kube-apiserver.yaml ~/kube-apiserver.yaml
vim /etc/kubernetes/manifests/kube-apiserver.yaml
And add a new argument:
--disable-admission-plugins=NamespaceLifecycle
Now we wait till the apiserver has restarted, using watch crictl ps. Then we can delete the default Namespace:
k delete ns default
If we wait a bit we should see that it gets automatically created again:
k get ns
Get Api Version
Create the file /root/versions with three lines, each containing only one number from the installed K8s server version (major, minor, patch):
Tip
k version
Solution
We can use kubectl to show the installed K8s version:
k version
k version --short
The K8s version is written as Major.Minor.Patch.
Write each number on its own line into /root/versions.
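For example (hypothetical values, assuming the server reports v1.28.2):
cat <<EOF > /root/versions
1
28
2
EOF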
Identify Api Group
Write the Api Group of Deployments into /root/group.
Tip
k explain
Solution
We can use kubectl to list information about a resource type:
k explain deploy
This will show VERSION: apps/v1.
The version is displayed as VERSION: {group}/{version}.
echo apps > /root/group
Convert Deprecated Resource
There is a CronJob file at /apps/cronjob.yaml which uses a deprecated Api version.
Update the file to use the non-deprecated one.
Tip
When creating a resource which uses a deprecated version, kubectl will show a warning.
We can also check the Kubernetes documentation.
Solution
Creating the resource will show a warning and a note about which version to use instead:
k -f /apps/cronjob.yaml create
We can also check the Kubernetes documentation.
The file should be changed to:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: backup
spec:
  schedule: "5 4 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          ...