The pre-configured Cluster API Provider vSphere (CAPV) images use containerd as their container runtime. To get containerd to pull images from a registry that is insecure, or that uses a self-signed certificate, you need to make a few changes to the /etc/containerd/config.toml file.
These are the steps I followed to get a private Harbor registry working over HTTPS with a self-signed certificate that my Kubernetes clusters did not trust.
If you have already deployed the workload cluster that needs to use this private registry, the easiest way to get this working is to manually adjust the file on each of the Kubernetes nodes and restart the containerd service.
Existing Workload Cluster
SSH to each node in the workload cluster (the default username is capv if the cluster was created with CAPV) and run the following, which will replace any existing content of the file:
sudo su
cat > /etc/containerd/config.toml << EOF
version = 2
[plugins]
  [plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "k8s.gcr.io/pause:3.2"
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors."REGISTRY-FQDN"]
        endpoint = ["https://REGISTRY-FQDN"]
    [plugins."io.containerd.grpc.v1.cri".registry.configs]
      [plugins."io.containerd.grpc.v1.cri".registry.configs."REGISTRY-FQDN".tls]
        insecure_skip_verify = true
EOF
sudo systemctl restart containerd
sudo systemctl status containerd
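To confirm the node can now pull from the registry, you can do a test pull directly through containerd. This assumes crictl is available on the node (it ships with most CAPV images); the image path below is only an example, so substitute your own:
# Test pull an image through containerd (example path - use your own)
sudo crictl pull REGISTRY-FQDN/myrepo/my-node-app:latest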
A nicer way is to add the configuration above to your cluster.yaml and have it populate the /etc/containerd/config.toml file whenever you create another cluster with CAPV.
New Workload Cluster
Update your cluster.yaml to include a files: section for the KubeadmConfigTemplate and KubeadmControlPlane resource types. The files: section needs to be added in the following locations:
KubeadmConfigTemplate (worker nodes): add a files: section to spec.template.spec
KubeadmControlPlane (control plane nodes): add a files: section to spec.kubeadmConfigSpec
Replace REGISTRY-FQDN with the FQDN or IP address of your registry, and ensure that the endpoint uses the correct protocol (http or https).
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
metadata:
  name: cluster01
  namespace: default
spec:
  kubeadmConfigSpec:
    files:
    - path: /etc/containerd/config.toml
      content: |
        version = 2
        [plugins]
          [plugins."io.containerd.grpc.v1.cri"]
            sandbox_image = "k8s.gcr.io/pause:3.2"
            [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
              [plugins."io.containerd.grpc.v1.cri".registry.mirrors."REGISTRY-FQDN"]
                endpoint = ["https://REGISTRY-FQDN"]
            [plugins."io.containerd.grpc.v1.cri".registry.configs]
              [plugins."io.containerd.grpc.v1.cri".registry.configs."REGISTRY-FQDN".tls]
                insecure_skip_verify = true
---
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
kind: KubeadmConfigTemplate
metadata:
  name: cluster01-md-0
  namespace: default
spec:
  template:
    spec:
      files:
      - path: /etc/containerd/config.toml
        content: |
          version = 2
          [plugins]
            [plugins."io.containerd.grpc.v1.cri"]
              sandbox_image = "k8s.gcr.io/pause:3.2"
              [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
                [plugins."io.containerd.grpc.v1.cri".registry.mirrors."REGISTRY-FQDN"]
                  endpoint = ["https://REGISTRY-FQDN"]
              [plugins."io.containerd.grpc.v1.cri".registry.configs]
                [plugins."io.containerd.grpc.v1.cri".registry.configs."REGISTRY-FQDN".tls]
                  insecure_skip_verify = true
Deploy the new cluster from your Cluster API management cluster: kubectl apply -f cluster.yaml
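From the management cluster you can watch the machines being provisioned. A quick check, assuming the cluster is named cluster01 as in the examples above:
# Watch the Cluster API resources come up
kubectl get cluster,kubeadmcontrolplane,machines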
Once the cluster is up, you can validate that the file has the correct content by SSH'ing in to each of your cluster's nodes and running:
sudo cat /etc/containerd/config.toml
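If you would rather not SSH to each node by hand, a rough loop like the one below checks every node in one go. This is only a sketch: it assumes the workload cluster kubeconfig retrieved in the next section, and direct SSH access to the nodes as capv:
# Print the containerd config from every node (assumes SSH access as capv)
for node in $(kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}' --kubeconfig=$HOME/$WORKLOAD_CLUSTER_NAME/kubeconfig); do
  echo "--- $node ---"
  ssh capv@$node sudo cat /etc/containerd/config.toml
done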
Create Kubernetes Secret
Within your new workload cluster, you can now run docker login REGISTRY-FQDN and utilise the stored credentials to create a Kubernetes Secret that deployments can use to pull from your private registry.
# Retrieve the new workload cluster kubeconfig
export WORKLOAD_CLUSTER_NAME="<CLUSTER-NAME>"
mkdir -p $HOME/$WORKLOAD_CLUSTER_NAME
kubectl get secret $WORKLOAD_CLUSTER_NAME-kubeconfig -o=jsonpath='{.data.value}' | { base64 -d 2>/dev/null || base64 -D; } > $HOME/$WORKLOAD_CLUSTER_NAME/kubeconfig
# Enter a valid username/password for the registry
docker login REGISTRY-FQDN
# After a successful login, create a Kubernetes secret based on the Docker config file
kubectl create secret generic regcred --from-file=.dockerconfigjson=$HOME/.docker/config.json --type=kubernetes.io/dockerconfigjson --kubeconfig=$HOME/$WORKLOAD_CLUSTER_NAME/kubeconfig
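Alternatively, if you would rather not rely on the local Docker config file, the same pull secret can be created directly with kubectl. This is an equivalent sketch; the username and password placeholders are yours to fill in:
# Create the pull secret directly, without needing a prior docker login
kubectl create secret docker-registry regcred \
  --docker-server=REGISTRY-FQDN \
  --docker-username=<USERNAME> \
  --docker-password=<PASSWORD> \
  --kubeconfig=$HOME/$WORKLOAD_CLUSTER_NAME/kubeconfig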
Deploying from the Private Repo/Registry
Create a deployment using the created secret. This step assumes you have a container image available to pull from your private registry. The image path could look something like this: harbor01.domain.com/myrepo/my-node-app:latest.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: node-app
  name: node-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-app
  template:
    metadata:
      labels:
        app: node-app
    spec:
      containers:
      - image: REGISTRY-FQDN/REPO/CONTAINER:TAG
        name: node-app
      imagePullSecrets:
      - name: regcred
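Save the manifest and apply it to the workload cluster (deployment.yaml is just an example file name):
kubectl apply -f deployment.yaml --kubeconfig=$HOME/$WORKLOAD_CLUSTER_NAME/kubeconfig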
Validate that your app has been deployed:
kubectl get deployment node-app --kubeconfig=$HOME/$WORKLOAD_CLUSTER_NAME/kubeconfig
kubectl get pods -l app=node-app --kubeconfig=$HOME/$WORKLOAD_CLUSTER_NAME/kubeconfig
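If the pod sits in ImagePullBackOff or ErrImagePull, the pod events usually reveal whether the registry endpoint, the certificate, or the credentials are at fault:
# Inspect the pod events for image pull errors
kubectl describe pods -l app=node-app --kubeconfig=$HOME/$WORKLOAD_CLUSTER_NAME/kubeconfig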
Snippet from cluster.yaml including files:
This is a snippet from the full cluster.yaml that Cluster API generates, showing a document that includes the files: section.
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
metadata:
  name: wlc01
  namespace: default
spec:
  infrastructureTemplate:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: VSphereMachineTemplate
    name: wlc01
  kubeadmConfigSpec:
    clusterConfiguration:
      apiServer:
        extraArgs:
          cloud-provider: external
      controllerManager:
        extraArgs:
          cloud-provider: external
    initConfiguration:
      nodeRegistration:
        criSocket: /var/run/containerd/containerd.sock
        kubeletExtraArgs:
          cloud-provider: external
        name: '{{ ds.meta_data.hostname }}'
    joinConfiguration:
      nodeRegistration:
        criSocket: /var/run/containerd/containerd.sock
        kubeletExtraArgs:
          cloud-provider: external
        name: '{{ ds.meta_data.hostname }}'
    preKubeadmCommands:
    - hostname "{{ ds.meta_data.hostname }}"
    - echo "::1 ipv6-localhost ipv6-loopback" >/etc/hosts
    - echo "127.0.0.1 localhost" >>/etc/hosts
    - echo "127.0.0.1 {{ ds.meta_data.hostname }}" >>/etc/hosts
    - echo "{{ ds.meta_data.hostname }}" >/etc/hostname
    files:
    - path: /etc/containerd/config.toml
      content: |
        version = 2
        [plugins]
          [plugins."io.containerd.grpc.v1.cri"]
            sandbox_image = "k8s.gcr.io/pause:3.2"
            [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
              [plugins."io.containerd.grpc.v1.cri".registry.mirrors."REGISTRY-FQDN"]
                endpoint = ["https://REGISTRY-FQDN"]
            [plugins."io.containerd.grpc.v1.cri".registry.configs]
              [plugins."io.containerd.grpc.v1.cri".registry.configs."REGISTRY-FQDN".tls]
                insecure_skip_verify = true
    useExperimentalRetryJoin: true
    users:
    - name: capv
      sshAuthorizedKeys:
      - ssh-rsa OMITTED
      sudo: ALL=(ALL) NOPASSWD:ALL
  replicas: 1
  version: v1.18.2
Other Articles in this Series
Cluster API Setup Steps (vSphere)
Cluster API Workload Cluster (vSphere)