CKA Practice tests
Pods
How many pods exist on the cluster in the current namespace (default)?
Create a new pod with the nginx image.
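Both of these are single kubectl commands:

```shell
# Count the pods in the current (default) namespace
kubectl get pods

# Create a new pod named nginx with the nginx image
kubectl run nginx --image=nginx
```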
What images are the pods created with?
This assumes they were not created with a Deployment.
What nodes are the pods placed on
Then look at the NODE column.
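The wide output format adds the node placement:

```shell
# -o wide adds NODE, IP and other columns to the pod listing
kubectl get pods -o wide
```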
How many containers are part of the pod webapp?
What images are used in the new webapp pod?
Then look at the Containers.*.Image
What is the state of the container agentx in the pod webapp?
The pod status showed ImagePullBackOff, but the accepted answer was Error (the container State shown by kubectl describe).
Why do you think the container agentx in pod webapp is in error?
There was no image named agentx found on Docker Hub, hence the ImagePullBackOff.
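The container state and the reason behind it both come from describing the pod:

```shell
# Look under Containers -> agentx -> State for the container state,
# and at the Events section at the bottom for the failed image pull
kubectl describe pod webapp
```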
What does the READY column in the output of the kubectl get pods command indicate?
A: Ready containers in the pod / total number of containers in the pod.
Delete the webapp Pod.
Create a new pod with the name redis and with the image redis123.
kubectl run redis --image=redis123 --dry-run=client -o yaml > redis.yaml
kubectl apply -f redis.yaml
Now change the image on this pod to redis.
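Two ways to fix the image without recreating the file, assuming the container is also named redis (kubectl run names the container after the pod):

```shell
# Option 1: patch the image in place
kubectl set image pod/redis redis=redis

# Option 2: edit the live object and change spec.containers[0].image
kubectl edit pod redis
```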
Replica Sets
Assume namespace is default
unless told otherwise
How many PODs exist on the system?
How many ReplicaSets exist on the system?
How about now? How many ReplicaSets do you see?
Answer: 1
How many PODs are DESIRED in the new-replica-set?
Then view the number under DESIRED
What is the image used to create the pods in the new-replica-set?
View Pod Template. Image
How many PODs are READY in the new-replica-set?
View READY
Why do you think the PODs are not ready?
The image doesn't exist.
Delete any one of the 4 PODs.
How many PODs exist now?
A: 4
Because the ReplicaSet recreated the pod we deleted.
Why are there still 4 PODs, even after you deleted one?
ReplicaSet ensures that the desired number of PODs always run
Create a ReplicaSet using the replicaset-definition-1.yaml file located at /root/.
Fix the issue in the replicaset-definition-2.yaml file and create a ReplicaSet using it.
Delete the two newly created ReplicaSets - replicaset-1 and replicaset-2
Fix the original replica set new-replica-set to use the correct busybox image.
export KUBE_EDITOR=nano # optional: use nano instead of the default vi
kubectl edit replicasets/new-replica-set
# Fix the image in the pod template, then delete the existing pods so the
# ReplicaSet recreates them from the corrected template
kubectl delete pods/<names of pods>
Scale the ReplicaSet to 5 PODs.
Use the kubectl scale command, or edit the ReplicaSet with kubectl edit replicaset.
Now scale the ReplicaSet down to 2 PODs.
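Both scaling steps with the imperative command:

```shell
# Scale up to 5 replicas
kubectl scale replicaset new-replica-set --replicas=5

# Scale back down to 2
kubectl scale replicaset new-replica-set --replicas=2
```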
Deployments
Assume namespace is default
unless told otherwise
How many PODs exist on the system?
How many ReplicaSets exist on the system?
rs is the short name for replicaset, which we can find with kubectl api-resources.
How many Deployments exist on the system?
How many Deployments exist on the system now?
How many ReplicaSets exist on the system now?
rs is the short name for replicaset, which we can find with kubectl api-resources.
How many PODs exist on the system now?
Out of all the existing PODs, how many are ready?
# Result from command
controlplane ~ ➜ k get po
NAME READY STATUS RESTARTS AGE
frontend-deployment-7fbf4f5cd9-bgwdl 0/1 ImagePullBackOff 0 82s
frontend-deployment-7fbf4f5cd9-hrn69 0/1 ImagePullBackOff 0 82s
frontend-deployment-7fbf4f5cd9-7z2ml 0/1 ImagePullBackOff 0 82s
frontend-deployment-7fbf4f5cd9-xdbxg 0/1 ImagePullBackOff 0 82s
A: None, 0
What is the image used to create the pods in the new deployment?
Locate Pod Template.*.Image
Why do you think the deployment is not ready?
A: ImagePullBackOff
- No image found with that name on the Public Docker Registry
Create a new Deployment using the deployment-definition-1.yaml file located at /root/.
How to figure out the issue and save time
The best way to find the issue is to apply the file and see what error comes back.
We get the error back:
Error from server (BadRequest): error when creating "deployment-definition-1.yaml": deployment in version "v1" cannot be handled as a Deployment: no kind "deployment" is registered for version "apps/v1" in scheme "k8s.io/apimachinery@v1.26.0-k3s1/pkg/runtime/scheme.go:100"
We can check that the apiVersion is correct with kubectl api-resources.
Since apps/v1 is correct, the problem must be the kind: it should be Deployment with a capital D, since kind values use PascalCase (the first letter of each word is capitalized).
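One way to confirm the correct casing and apiVersion from the cluster itself:

```shell
# The KIND column shows Deployment (PascalCase), APIVERSION shows apps/v1
kubectl api-resources | grep -i deployment
```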
Create a new Deployment with the below attributes using your own deployment definition file.
kubectl create deployment httpd-frontend --image="httpd:2.4-alpine" --replicas=3 --dry-run=client -o yaml > deployment.yaml
kubectl apply -f deployment.yaml
Namespaces
How many Namespaces exist on the system?
Count the namespaces
How many pods exist in the research namespace?
Create a POD in the finance namespace.
Which namespace has the blue pod in it?
What DNS name should the Blue application use to access the database db-service in its own namespace - marketing?
A: db-service
We know this because both are in the same namespace, so we don't need the cross-namespace DNS name.
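For reference, the full in-cluster DNS pattern is service.namespace.svc.cluster.local; the short name only resolves within the same namespace. A quick sketch of how the names are built:

```shell
service=db-service
namespace=marketing

# Same namespace: the bare service name resolves
echo "$service"

# Cross-namespace: append the namespace (and optionally svc.cluster.local)
echo "$service.$namespace"
echo "$service.$namespace.svc.cluster.local"
```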
Services
Assume namespace is default
unless told otherwise
How many Services exist on the system?
What is the type of the default kubernetes service?
# Results
controlplane ~ ➜ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 2m10s
A: ClusterIP
What is the targetPort configured on the kubernetes service?
View spec.ports[0].targetPort
How many labels are configured on the kubernetes service?
View metadata.labels
How many Endpoints are attached on the kubernetes service?
A: 1
You can confirm with kubectl get endpoints kubernetes, which lists a single address (the API server) for the service shown in the YAML below.
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2023-01-14T17:42:24Z"
  labels:
    component: apiserver
    provider: kubernetes
  name: kubernetes
  namespace: default
  resourceVersion: "193"
  uid: 4570bc97-ad41-4080-931f-08a0fbb72e59
spec:
  clusterIP: 10.43.0.1
  clusterIPs:
  - 10.43.0.1
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 6443
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
How many Deployments exist on the system now?
What is the image used to create the pods in the deployment?
Imperative Commands
Deploy a pod named nginx-pod using the nginx:alpine image
Deploy a redis pod using the redis:alpine image with the labels set to tier=db.
Either use imperative commands to create the pod with the labels. Or else use imperative commands to generate the pod definition file, then add the labels before creating the pod using the file.
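The fully imperative version is a single command:

```shell
# --labels (short form -l) sets labels at creation time
kubectl run redis --image=redis:alpine --labels=tier=db
```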
Create a service redis-service to expose the redis application within the cluster on port 6379
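Assuming the redis pod from the previous step already exists, expose can create the service:

```shell
# Creates a ClusterIP service named redis-service targeting the redis pod
kubectl expose pod redis --port=6379 --name=redis-service
```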
Create a deployment named webapp using the image kodekloud/webapp-color with 3 replicas.
Create a new pod called custom-nginx using the nginx image and expose it on container port 8080
Create a new namespace called dev-ns
Create a new deployment called redis-deploy in the dev-ns namespace with the redis image. It should have 2 replicas.
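Both namespace tasks imperatively:

```shell
kubectl create namespace dev-ns

# -n places the deployment in dev-ns; --replicas=2 sets the scale
kubectl create deployment redis-deploy --image=redis --replicas=2 -n dev-ns
```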
Create a pod called httpd using the image httpd:alpine in the default namespace. Next, create a service of type ClusterIP by the same name (httpd). The target port for the service should be 80.
I went about this by creating the pod and then the service as two separate steps, which is not the fastest approach. A single kubectl run command with the --expose flag does both.
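With --expose, kubectl run creates a ClusterIP service with the same name as the pod, using --port as the service's target port:

```shell
kubectl run httpd --image=httpd:alpine --port=80 --expose
```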
Labels and Selectors
How many PODs are in the finance business unit (bu)?
How many objects are in the prod environment including PODs, ReplicaSets and any other objects?
Identify the POD which is part of the prod environment, the finance BU and of frontend tier?
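The three selector queries above, using the label keys from this lab (bu, env, tier):

```shell
# Pods in the finance business unit
kubectl get pods --selector bu=finance

# All object types in the prod environment
kubectl get all --selector env=prod

# The single pod matching all three labels
kubectl get pods --selector env=prod,bu=finance,tier=frontend
```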
A ReplicaSet definition file is given replicaset-definition-1.yaml. Try to create the replicaset. There is an issue with the file. Try to fix it.
Taints and Tolerations
Do any taints exist on node01?
Create a taint on node01 with key spray, value mortein, and effect NoSchedule.
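The taint command takes the form key=value:effect:

```shell
kubectl taint nodes node01 spray=mortein:NoSchedule

# Verify it was applied
kubectl describe node node01 | grep Taints
```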
Create a new pod with the nginx image and pod name as mosquito.
What is the state of the POD?
A: Pending
Why do you think the pod is in a pending state?
A: The pod can't tolerate the spray=mortein taint on node01.
Create another pod named bee with the nginx image, which has a toleration set to the taint mortein.
This is a question I struggled on.
Generate the pod definition file with a dry run, then add the tolerations section as shown below:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: bee
  name: bee
spec:
  containers:
  - image: nginx
    name: bee
    resources: {}
  tolerations:
  - key: "spray"
    operator: "Equal"
    value: "mortein"
    effect: "NoSchedule"
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
Notice the bee pod was scheduled on node node01 despite the taint.
Yes
controlplane ~ ➜ k get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
bee 1/1 Running 0 3m16s 10.244.1.2 node01 <none> <none>
mosquito 0/1 Pending 0 7m44s <none> <none> <none> <none>
Do you see any taints on the controlplane node?
A: Yes, NoSchedule
controlplane ~ ➜ k describe node/controlplane | grep Taints
Taints: node-role.kubernetes.io/control-plane:NoSchedule
Remove the taint on controlplane, which currently has the taint effect of NoSchedule.
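A trailing minus sign removes a taint:

```shell
kubectl taint nodes controlplane node-role.kubernetes.io/control-plane:NoSchedule-
```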
Node Affinity
How many Labels exist on node node01?
What is the value set to the label key beta.kubernetes.io/arch on node01?
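Both label questions can be answered from one listing:

```shell
# --show-labels prints every label on the node; count them and read
# the value after beta.kubernetes.io/arch=
kubectl get node node01 --show-labels
```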
Apply a label color=blue to node node01.
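The imperative label command:

```shell
kubectl label node node01 color=blue
```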
Create a new deployment named blue with the nginx image and 3 replicas.
Which nodes can the pods for the blue deployment be placed on?
Set Node Affinity to the deployment to place the pods on node01 only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue
spec:
  replicas: 3
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: color
                operator: In
                values:
                - blue
Which nodes are the pods placed on now?
Create a new deployment named red with the nginx image and 2 replicas, and ensure it gets placed on the controlplane node only.
Use the label key node-role.kubernetes.io/control-plane, which is already set on the controlplane node.
We want to use the Exists operator because the label doesn't have a value:
Labels: beta.kubernetes.io/arch=amd64
        beta.kubernetes.io/os=linux
        kubernetes.io/arch=amd64
        kubernetes.io/hostname=controlplane
        kubernetes.io/os=linux
        node-role.kubernetes.io/control-plane=
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: red
  name: red
spec:
  replicas: 2
  selector:
    matchLabels:
      app: red
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: red
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/control-plane
                operator: Exists
status: {}