Kind¶
Important
If you already have Docker Desktop (Mac, Windows) or Docker Engine (Linux) installed and running, you can skip this step for the demo and go directly to Deploying applications with kubectl.
Kind is a great tool for running test clusters locally: each node runs as a container, which makes it lightweight and easy to spin up throw-away clusters for testing purposes.
- It can be used to deploy a local K8s cluster or for CI
- Supports ingress / load balancers (with some tuning)
- Supports deployment of multiple clusters / versions
- Supports deployment of single- or multi-node clusters
For more information, check out https://kind.sigs.k8s.io/.
Install¶
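For example, Kind can be installed with Homebrew on macOS (see https://kind.sigs.k8s.io/docs/user/quick-start/ for release binaries and other platforms):
brew install kind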
Usage¶
To create a K8s cluster with Kind, use the kind create cluster command.
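With no arguments this creates a single-node cluster named kind:
kind create cluster
In this guide, however, we pass a config file to customize each cluster (see below).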
Create a first kind cluster dev¶
In this guide we will run 2 clusters side by side: dev and stg.
In order to have consistency over the kind cluster configuration and settings (such as the K8s version), create each cluster by specifying a config file.
Here's the config YAML file; the cluster is then created from it (see the command after the file):
cat > kind-dev.yaml << EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
kubeadmConfigPatches:
- |
  apiVersion: kubeadm.k8s.io/v1beta2
  kind: ClusterConfiguration
  metadata:
    name: config
  apiServer:
    extraArgs:
      "service-account-issuer": "kubernetes.default.svc"
      "service-account-signing-key-file": "/etc/kubernetes/pki/sa.key"
networking:
  # if set to true, the default CNI (kindnet) will not be installed, useful to install Cilium instead!
  disableDefaultCNI: false
nodes:
- role: control-plane
  image: kindest/node:v1.24.4@sha256:adfaebada924a26c2c9308edd53c6e33b3d4e453782c0063dc0028bdebaddf98
- role: worker
  image: kindest/node:v1.24.4@sha256:adfaebada924a26c2c9308edd53c6e33b3d4e453782c0063dc0028bdebaddf98
  extraPortMappings:
  - containerPort: 80
    hostPort: 3080
    listenAddress: "0.0.0.0"
  - containerPort: 443
    hostPort: 3443
    listenAddress: "0.0.0.0"
EOF
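Then create the dev cluster from this file, for example by passing the cluster name on the command line:
kind create cluster --name dev --config kind-dev.yaml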
Here are the typical logs when creating a Kind cluster:
enabling experimental podman provider
Creating cluster "dev" ...
 ✓ Ensuring node image (kindest/node:v1.24.4) 🖼
 ✓ Preparing nodes 📦 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✓ Joining worker nodes 🚜
Set kubectl context to "kind-dev"
You can check that everything is working. Each K8s node is actually a running container:
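For example, with the podman provider used in the logs above (use docker ps instead if you run Kind on Docker):
podman ps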
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6993dbdbf82b docker.io/kindest/node@sha256:adfaebada924a26c2c9308edd53c6e33b3d4e453782c0063dc0028bdebaddf98 3 minutes ago Up 3 minutes ago 127.0.0.1:55210->6443/tcp dev-control-plane
dd461d2b9d4a docker.io/kindest/node@sha256:adfaebada924a26c2c9308edd53c6e33b3d4e453782c0063dc0028bdebaddf98 3 minutes ago Up 3 minutes ago 0.0.0.0:3080->80/tcp, 0.0.0.0:3443->443/tcp dev-worker
- See that the cluster is up and running:
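For example, with the kind-dev context active:
kubectl get nodes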
NAME STATUS ROLES AGE VERSION
dev-control-plane Ready control-plane 11h v1.24.4
dev-worker Ready <none> 11h v1.24.4
Note
You can see that our cluster has a control-plane node and a worker node.
- Verify the K8s cluster status:
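For example:
kubectl cluster-info --context kind-dev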
Kubernetes control plane is running at https://127.0.0.1:56141
CoreDNS is running at https://127.0.0.1:56141/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
- Check that the system Pods are up and running:
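For example, listing Pods across all namespaces:
kubectl get pods -A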
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6d4b75cb6d-lqcs6 1/1 Running 0 4m6s
kube-system coredns-6d4b75cb6d-xgbxk 1/1 Running 0 4m6s
kube-system etcd-dev-control-plane 1/1 Running 0 4m18s
kube-system kindnet-tjfzj 1/1 Running 0 4m6s
kube-system kindnet-vc66d 1/1 Running 0 4m1s
kube-system kube-apiserver-dev-control-plane 1/1 Running 0 4m18s
kube-system kube-controller-manager-dev-control-plane 1/1 Running 0 4m18s
kube-system kube-proxy-5kp6d 1/1 Running 0 4m6s
kube-system kube-proxy-dfczd 1/1 Running 0 4m1s
kube-system kube-scheduler-dev-control-plane 1/1 Running 0 4m18s
local-path-storage local-path-provisioner-6b84c5c67f-csxg6 1/1 Running 0 4m6s
Create a second kind cluster stg¶
cat > kind-stg.yaml << EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
kubeadmConfigPatches:
- |
  apiVersion: kubeadm.k8s.io/v1beta2
  kind: ClusterConfiguration
  metadata:
    name: config
  apiServer:
    extraArgs:
      "service-account-issuer": "kubernetes.default.svc"
      "service-account-signing-key-file": "/etc/kubernetes/pki/sa.key"
networking:
  # if set to true, the default CNI (kindnet) will not be installed, useful to install Cilium instead!
  disableDefaultCNI: false
nodes:
- role: control-plane
  image: kindest/node:v1.25.2@sha256:9be91e9e9cdf116809841fc77ebdb8845443c4c72fe5218f3ae9eb57fdb4bace
  extraPortMappings:
  - containerPort: 80
    hostPort: 4080
    listenAddress: "0.0.0.0"
  - containerPort: 443
    hostPort: 4443
    listenAddress: "0.0.0.0"
EOF
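Then create the stg cluster from this file, the same way as for dev:
kind create cluster --name stg --config kind-stg.yaml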
- List clusters with kind get clusters:
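kind get clusters
This should list both dev and stg.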
Result
Two K8s clusters, dev and stg, have been created.
Note
After a reboot, podman will be disabled. To recover the podman and kind containers, re-run the following steps:
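For example, assuming podman runs inside a podman machine (adjust the container names to your clusters):
podman machine start
podman start dev-control-plane dev-worker stg-control-plane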
Next¶
Now that the Kind clusters are created, continue by deploying some applications.