<h2id="issues-with-btrfs">Issues with BTRFS<aclass="headerlink"href="#issues-with-btrfs"title="Permanent link">¶</a></h2>
<ul>
<li>As <a href="https://github.com/jaredallard">@jaredallard</a> <a href="https://github.com/rancher/k3d/pull/48">pointed out</a>, people running <code>k3d</code> on a system with <strong>btrfs</strong> may need to mount <code>/dev/mapper</code> into the nodes for the setup to work.<ul>
<li>This will do: <code class="highlight">k3d cluster create CLUSTER_NAME -v /dev/mapper:/dev/mapper</code></li>
</ul>
</li>
</ul>
<h2id="issues-with-zfs">Issues with ZFS<aclass="headerlink"href="#issues-with-zfs"title="Permanent link">¶</a></h2>
<ul>
<li>
<p>k3s currently has <a href="https://github.com/rancher/k3s/issues/66">no support for ZFS</a> and thus, creating multi-server setups (e.g. <code class="highlight">k3d cluster create multiserver --servers 3</code>) fails, because the initializing server node (server flag <code>--cluster-init</code>) errors out with the following log:</p>
<div class="highlight"><pre><span></span><code>starting kubernetes: preparing server: start cluster and https: raft_init(): io: create I/O capabilities probe file: posix_allocate: operation not supported on socket
</code></pre></div>
<ul>
<li>Possible <a href="https://github.com/rancher/k3d/issues/133#issuecomment-549065666">fix/workaround by @zer0def</a>:<ul>
<li>use a docker storage driver which cleans up properly (e.g. overlay2)</li>
<li>clean up or expand docker root filesystem</li>
<li>change the kubelet’s eviction thresholds upon cluster creation (see the formatted command below): <code>k3d cluster create --k3s-agent-arg '--kubelet-arg=eviction-hard=imagefs.available&lt;1%,nodefs.available&lt;1%' --k3s-agent-arg '--kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1%'</code></li>
<li><strong>Note</strong>: There are many ways to use the <code>"</code> and <code>'</code> quotes, just be aware that sometimes shells also try to interpret/interpolate parts of the commands</li>
</ul>
</li>
</ul>
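<p>For readability, here is the same eviction-threshold workaround written out as a multi-line command (the 1% values are the ones from the workaround above; adjust them to your environment):</p>
<div class="highlight"><pre><span></span><code>k3d cluster create \
  --k3s-agent-arg '--kubelet-arg=eviction-hard=imagefs.available&lt;1%,nodefs.available&lt;1%' \
  --k3s-agent-arg '--kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1%'
</code></pre></div>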
<li>As of version v3.1.0, we’re injecting the <code>host.k3d.internal</code> entry into the k3d containers (k3s nodes) and into the CoreDNS ConfigMap, enabling you to access your host system by referring to it as <code>host.k3d.internal</code></li>
</ul>
<h2id="running-behind-a-corporate-proxy">Running behind a corporate proxy<aclass="headerlink"href="#running-behind-a-corporate-proxy"title="Permanent link">¶</a></h2>
<p>Running k3d behind a corporate proxy can lead to some issues with k3d that have already been reported in more than one issue.
<p>Running k3d behind a corporate proxy can lead to some issues with k3d that have already been reported in more than one issue.<br/>
Some can be fixed by passing the <code>HTTP_PROXY</code> environment variables to k3d, some have to be fixed in docker’s <code>daemon.json</code> file and some are as easy as adding a volume mount.</p>
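<p>A minimal sketch of passing the proxy variables into the node containers, assuming your k3d version supports the <code>--env</code>/<code>-e</code> flag of <code>k3d cluster create</code> (proxy address and exclusion list are placeholders; check <code>k3d cluster create --help</code>):</p>
<div class="highlight"><pre><span></span><code># hypothetical corporate proxy endpoint; adjust host, port and NO_PROXY to your environment
k3d cluster create proxied-cluster \
  -e 'HTTP_PROXY=http://proxy.corp.example:3128' \
  -e 'HTTPS_PROXY=http://proxy.corp.example:3128' \
  -e 'NO_PROXY=localhost,127.0.0.1,10.0.0.0/8'
</code></pre></div>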
<h3id="pods-fail-to-start-x509-certificate-signed-by-unknown-authority">Pods fail to start: <code>x509: certificate signed by unknown authority</code><aclass="headerlink"href="#pods-fail-to-start-x509-certificate-signed-by-unknown-authority"title="Permanent link">¶</a></h3>
<ul>
<li>
<p>Example Error Message:</p>
<divclass="highlight"><pre><span></span><code>Failed to create pod sandbox: rpc error: <spanclass="nv">code</span><spanclass="o">=</span> Unknown <spanclass="nv">desc</span><spanclass="o">=</span> failed to get sandbox image <spanclass="s2">"docker.io/rancher/pause:3.1"</span>: failed to pull image <spanclass="s2">"docker.io/rancher/pause:3.1"</span>: failed to pull and unpack image <spanclass="s2">"docker.io/rancher/pause:3.1"</span>: failed to resolve reference <spanclass="s2">"docker.io/rancher/pause:3.1"</span>: failed to <spanclass="k">do</span> request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: x509: certificate signed by unknown authority
</code></pre></div>
</li>
</ul>
<ul>
<li>Problem: inside the container, the certificate of the corporate proxy cannot be validated</li>
<li>Possible Solution: Mounting the CA Certificate from your host into the node containers at start time via <code>k3d cluster create --volume /path/to/your/certs.crt:/etc/ssl/certs/yourcert.crt</code></li>
</ul>
</li>
<li>Why: The issue was introduced by a change in the Linux kernel (<a href="https://cdn.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.12.2">Changelog 5.12.2</a>: <a href="https://github.com/torvalds/linux/commit/671c54ea8c7ff47bd88444f3fffb65bf9799ce43">Commit</a>), which changed the netfilter_conntrack behavior in a way that <code>kube-proxy</code> is not able to set the <code>nf_conntrack_max</code> value anymore</li>
<li>Workaround: we can tell <code>kube-proxy</code> to not even try to set this value (see the formatted command below): <code>k3d cluster create --k3s-server-arg "--kube-proxy-arg=conntrack-max-per-core=0" --k3s-agent-arg "--kube-proxy-arg=conntrack-max-per-core=0" --image rancher/k3s:v1.20.6-k3s</code></li>
<li>Fix: This is going to be fixed “upstream” in k3s itself in <a href="https://github.com/k3s-io/k3s/pull/3337">rancher/k3s#3337</a> and backported to k3s versions as low as v1.18.</li>
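<p>The same conntrack workaround, written out for readability (the image tag is the one from the workaround above):</p>
<div class="highlight"><pre><span></span><code>k3d cluster create \
  --k3s-server-arg "--kube-proxy-arg=conntrack-max-per-core=0" \
  --k3s-agent-arg "--kube-proxy-arg=conntrack-max-per-core=0" \
  --image rancher/k3s:v1.20.6-k3s
</code></pre></div>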
<ul>
<li><strong>hot-reloading</strong> of code when developing on k3d (Python Flask App)</li>
<li>build-deploy-test cycle using <strong>Tilt</strong></li>
<li>full cluster lifecycle for simple and <strong>multi-server</strong> clusters</li>
<li>Proof of Concept of using k3d as a service in <strong>Drone CI</strong></li>
</ul>
</div>
<ul>
<li><ahref="https://www.youtube.com/watch?v=hMr3prm9gDM">Rancher Meetup - May 2020 - Simplifying Your Cloud-Native Development Workflow With K3s, K3c and K3d (YouTube)</a></li>
<h2id="multiple-server-nodes">Multiple server nodes<aclass="headerlink"href="#multiple-server-nodes"title="Permanent link">¶</a></h2>
<ul>
<li>multiple server nodes<ul>
<li>by default, when <code>--server</code> &gt; 1 and no <code>--datastore-x</code> option is set, the first server node (server-0) will be the initializing server node<ul>
<li>the initializing server node will have the <code>--cluster-init</code> flag appended</li>
<li>all other server nodes will refer to the initializing server node via <code>--server https://&lt;init-node&gt;:6443</code></li>
</ul>
</li>
</ul>
</li>
<li>by default, we expose the API-Port (<code>6443</code>) by forwarding traffic from the default server loadbalancer (nginx container) to the server node(s)</li>
<li>port <code>6443</code> of the loadbalancer is then mapped to a specific (<code>--api-port</code> flag) or a random (default) port on the host system</li>
</ul>
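<p>A minimal sketch combining these defaults (cluster name and host port are illustrative):</p>
<div class="highlight"><pre><span></span><code># three server nodes; map the loadbalancer port 6443 to port 6550 on the host
k3d cluster create mycluster --servers 3 --api-port 6550
</code></pre></div>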
<p>When using the <code>--network</code> flag to connect to the host network (i.e. <code>k3d cluster create --network host</code>), you won’t be able to create more than <strong>one server node</strong>.<br/>
An edge case would be one server node (with agent disabled) and one agent node.</p>
<p>By default, every network that k3d creates is working in <code>bridge</code> mode.<br/>
But when you try to use <code>--network bridge</code> to connect to docker’s internal <code>bridge</code> network, you may run into issues with grabbing certificates from the API-Server.<br/>
Single-Node clusters should work though.</p>
<p>As of k3d v4.0.0, released in January 2021, k3d ships with configuration file support for the <code>k3d cluster create</code> command.<br/>
This allows you to define all the things that you defined with CLI flags before in a nice and tidy YAML (as a Kubernetes user, we know you love it ;) ).</p>
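<p>A minimal sketch of using such a config file via the <code>--config</code> flag (the path is illustrative):</p>
<div class="highlight"><pre><span></span><code>k3d cluster create --config /path/to/my-cluster-config.yaml
</code></pre></div>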
<divclass="admonition info">
<pclass="admonition-title">Syntax & Semantics</p>
<p>The options defined in the config file are not 100% the same as the CLI flags.
<p>The options defined in the config file are not 100% the same as the CLI flags.<br/>
This concerns naming and style/usage/structure, e.g.</p>
<ul>
<li><code>--api-port</code> is split up into a field named <code>kubeAPI</code> that has 3 different “child fields” (<code>host</code>, <code>hostIP</code> and <code>hostPort</code>)</li>
<p>The configuration options for k3d are continuously evolving and so is the config file (syntax) itself.<br/>
Currently, the config file is still in an Alpha-State, meaning, that it is subject to change anytime (though we try to keep breaking changes low).</p>
<divclass="admonition info">
<pclass="admonition-title">Validation via JSON-Schema</p>
<p>k3d uses a <a href="https://json-schema.org/">JSON-Schema</a> to describe the expected format and fields of the configuration file.<br/>
This schema is also used to <a href="https://github.com/xeipuuv/gojsonschema#validation">validate</a> a user-given config file.<br/>
This JSON-Schema can be found in the specific config version sub-directory in the repository (e.g. <a href="https://github.com/rancher/k3d/blob/main/pkg/config/v1alpha2/schema.json">here for <code>v1alpha2</code></a>) and could be used to look up supported fields or by linters to validate the config file, e.g. in your code editor.</p>
<p>Since the config options and the config file are changing quite a bit, it’s hard to keep track of all the supported config file settings, so here’s an example showing all of them as of the time of writing:</p>
<span class="nt">gpuRequest</span><span class="p">:</span> <span class="l l-Scalar l-Scalar-Plain">all</span> <span class="c1"># same as `--gpus all`</span>
</code></pre></div>
<h2id="config-file-vs-cli-flags">Config File vs. CLI Flags<aclass="headerlink"href="#config-file-vs-cli-flags"title="Permanent link">¶</a></h2>
<p>k3d uses <ahref="https://github.com/spf13/cobra"><code>Cobra</code></a> and <ahref="https://github.com/spf13/viper"><code>Viper</code></a> for CLI and general config handling respectively.
<p>k3d uses <ahref="https://github.com/spf13/cobra"><code>Cobra</code></a> and <ahref="https://github.com/spf13/viper"><code>Viper</code></a> for CLI and general config handling respectively.<br/>
This automatically introduces a “config option order of priority” (<ahref="https://github.com/spf13/viper#why-viper">precedence order</a>):</p>
<p>This means that you can define e.g. a “base configuration file” with settings that you share across different clusters and override only the fields that differ between those clusters in your CLI flags/arguments.<br/>
For example, you use the same config file to create three clusters which only have different names and <code>kubeAPI</code> (<code>--api-port</code>) settings.</p>
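<p>A sketch of that pattern, assuming a shared config file at <code>./base-config.yaml</code> (file name, cluster names and ports are illustrative; the CLI flags take precedence over the values in the file):</p>
<div class="highlight"><pre><span></span><code>k3d cluster create cluster-one --config ./base-config.yaml --api-port 6551
k3d cluster create cluster-two --config ./base-config.yaml --api-port 6552
</code></pre></div>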
<h1id="use-calico-instead-of-flannel">Use Calico instead of Flannel<aclass="headerlink"href="#use-calico-instead-of-flannel"title="Permanent link">¶</a></h1>
<p>If you want to use NetworkPolicy you can use Calico in k3s instead of Flannel.</p>
<h3id="1-download-and-modify-the-calico-descriptor">1. Download and modify the Calico descriptor<aclass="headerlink"href="#1-download-and-modify-the-calico-descriptor"title="Permanent link">¶</a></h3>
<h2id="1-download-and-modify-the-calico-descriptor">1. Download and modify the Calico descriptor<aclass="headerlink"href="#1-download-and-modify-the-calico-descriptor"title="Permanent link">¶</a></h2>
<p>You can following the <ahref="https://docs.projectcalico.org/master/reference/cni-plugin/configuration">documentation</a></p>
<p>Then you have to change the ConfigMap <code>calico-config</code>: in the <code>cni_network_config</code> section, add the entry for allowing IP forwarding.</p>
<p>Or you can directly use this <a href="../calico.yaml">calico.yaml</a> manifest.</p>
<h2id="2-create-the-cluster-without-flannel-and-with-calico">2. Create the cluster without flannel and with calico<aclass="headerlink"href="#2-create-the-cluster-without-flannel-and-with-calico"title="Permanent link">¶</a></h2>
<p>On the k3s cluster creation :
- add the flag <code>--flannel-backend=none</code>. For this, on k3d you need to forward this flag to k3s with the option <code>--k3s-server-arg</code>.
- mount (<code>--volume</code>) the calico descriptor in the auto deploy manifest directory of k3s <code>/var/lib/rancher/k3s/server/manifests/</code></p>
<p>So the command of the cluster creation is (when you are at root of the k3d repository)
<li>add the flag <code>--flannel-backend=none</code>. For this, on k3d you need to forward this flag to k3s with the option <code>--k3s-server-arg</code>.</li>
<li>mount (<code>--volume</code>) the calico descriptor in the auto deploy manifest directory of k3s <code>/var/lib/rancher/k3s/server/manifests/</code></li>
</ul>
<p>So the command of the cluster creation is (when you are at root of the k3d repository)</p>
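<p>A sketch of that command, assuming the modified <code>calico.yaml</code> lies in your current working directory, i.e. the repository root (the cluster name is illustrative):</p>
<div class="highlight"><pre><span></span><code>k3d cluster create calico-cluster \
  --k3s-server-arg '--flannel-backend=none' \
  --volume "$(pwd)/calico.yaml:/var/lib/rancher/k3s/server/manifests/calico.yaml"
</code></pre></div>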
<li>you can use the auto deploy manifest or a kubectl apply depending on your needs</li>
<li><img alt="❗" class="twemoji" src="https://twemoji.maxcdn.com/v/latest/svg/2757.svg" title=":exclamation:"/> Calico is not as quick as Flannel (but it provides more features)</li>
<h1id="running-cuda-workloads">Running CUDA workloads<aclass="headerlink"href="#running-cuda-workloads"title="Permanent link">¶</a></h1>
<p>If you want to run CUDA workloads on the K3S container you need to customize the container.<br/>
CUDA workloads require the NVIDIA Container Runtime, so containerd needs to be configured to use this runtime.<br/>
The K3S container itself also needs to run with this runtime.<br/>
If you are using Docker you can install the <a href="https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html">NVIDIA Container Toolkit</a>.</p>
<h2id="building-a-customized-k3s-image">Building a customized K3S image<aclass="headerlink"href="#building-a-customized-k3s-image"title="Permanent link">¶</a></h2>
<p>To get the NVIDIA container runtime in the K3S image you need to build your own K3S image. The native K3S image is based on Alpine but the NVIDIA container runtime is not supported on Alpine yet. To get around this we need to build the image with a supported base image.</p>
<p>To get the NVIDIA container runtime in the K3S image you need to build your own K3S image.<br/>
The native K3S image is based on Alpine but the NVIDIA container runtime is not supported on Alpine yet.<br/>
To get around this we need to build the image with a supported base image.</p>
<h3id="adapt-the-dockerfile">Adapt the Dockerfile<aclass="headerlink"href="#adapt-the-dockerfile"title="Permanent link">¶</a></h3>
<p>This <ahref="cuda/Dockerfile">Dockerfile</a> is based on the <ahref="https://github.com/rancher/k3s/blob/master/package/Dockerfile">K3S Dockerfile</a>.
<p>This <ahref="cuda/Dockerfile">Dockerfile</a> is based on the <ahref="https://github.com/rancher/k3s/blob/master/package/Dockerfile">K3s Dockerfile</a>.
The following changes are applied:</p>
<ol>
<li>Change the base images to Ubuntu 18.04 so the NVIDIA Container Runtime can be installed</li>
<li>Add a manifest for the NVIDIA driver plugin for Kubernetes</li>
<p>We need to configure containerd to use the NVIDIA Container Runtime. We need to customize the config.toml that is used at startup. K3s provides a way to do this using a <a href="config.toml.tmpl">config.toml.tmpl</a> file. More information can be found on the <a href="https://rancher.com/docs/k3s/latest/en/advanced/#configuring-containerd">K3s site</a>.</p>
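<p>A rough sketch of how the customized image could then be built and used with k3d (image name and tag are illustrative; the <code>--gpus</code> flag is the one the <code>gpuRequest</code> config option mentioned earlier maps to):</p>
<div class="highlight"><pre><span></span><code># build the customized K3s image from the adapted Dockerfile
docker build -t my-registry/k3s:cuda .
# create a cluster from that image and request all GPUs
k3d cluster create gputest --image my-registry/k3s:cuda --gpus all
</code></pre></div>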
<h2id="1-via-ingress-recommended">1. via Ingress (recommended)<aclass="headerlink"href="#1-via-ingress-recommended"title="Permanent link">¶</a></h2>
<p>In this example, we will deploy a simple nginx webserver deployment and make it accessible via ingress.
<p>In this example, we will deploy a simple nginx webserver deployment and make it accessible via ingress.<br/>
Therefore, we have to create the cluster in a way, that the internal port 80 (where the <code>traefik</code> ingress controller is listening on) is exposed on the host system.</p>
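<p>A sketch of the cluster creation for this example, using the flags explained in the notes below (the cluster name and agent count are illustrative):</p>
<div class="highlight"><pre><span></span><code># map localhost:8081 to port 80 on the loadbalancer, where the traefik ingress controller listens
k3d cluster create mycluster --api-port 6550 -p "8081:80@loadbalancer" --agents 2
</code></pre></div>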
<ol>
<li>
<div class="admonition info">
<p class="admonition-title">Good to know</p>
<ul>
<li><code>--api-port 6550</code> is not required for the example to work.<br/>
It’s used to have <code>k3s</code>’s API-Server listening on port 6550 with that port mapped to the host system.</li>
<li>
<p>the port-mapping construct <code>8081:80@loadbalancer</code> means:<br/>
“map port <code>8081</code> from the host to port <code>80</code> on the container which matches the nodefilter <code>loadbalancer</code>”</p>
<ul>
<li>the <code>loadbalancer</code> nodefilter matches only the <code>serverlb</code> that’s deployed in front of a cluster’s server nodes<ul>
<li>all ports exposed on the <code>serverlb</code> will be proxied to the same ports on all server nodes in the cluster</li>
</ul>
</li>
</ul>
</li>
</ul>
</div>
</li>
<p><codeclass="highlight">kubectl create service clusterip nginx --tcp<spanclass="o">=</span><spanclass="m">80</span>:80</code></p>
</li>
<li>
<p>Create an ingress object for it by copying the following manifest to a file and applying it with <code class="highlight">kubectl apply -f thatfile.yaml</code></p>
<p><strong>Note</strong>: <code>k3s</code> deploys <a href="https://github.com/containous/traefik"><code>traefik</code></a> as the default ingress controller</p>
<divclass="highlight"><pre><span></span><code><spanclass="c1"># apiVersion: networking.k8s.io/v1beta1 # for k3s < v1.19</span>
@ -1002,16 +995,22 @@ Therefore, we have to create the cluster in a way, that the internal port 80 (wh
</ol>
<h2id="2-via-nodeport">2. via NodePort<aclass="headerlink"href="#2-via-nodeport"title="Permanent link">¶</a></h2>
<ol>
<li>
<p>Create a cluster, mapping the port <code>30080</code> from <code>agent-0</code> to <code>localhost:8082</code></p>
<p><code class="highlight">k3d cluster create mycluster -p "8082:30080@agent[0]" --agents 2</code></p>
<ul>
<li><strong>Note 1</strong>: Kubernetes’ default NodePort range is <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport"><code>30000-32767</code></a></li>
<li>
<p><strong>Note 2</strong>: You may as well expose the whole NodePort range from the very beginning, e.g. via <code>k3d cluster create mycluster --agents 3 -p "30000-32767:30000-32767@server[0]"</code> (See <a href="https://www.youtube.com/watch?v=5HaU6338lAk">this video from @portainer</a>)</p>
<ul>
<li><strong>Warning</strong>: Docker creates iptable entries and a new proxy process per port-mapping, so this may take a very long time or even freeze your system!</li>
</ul>
</li>
</ul>
<p>… (Steps 2 and 3 like above) …</p>
</li>
<li>
<p>Create a NodePort service for it by copying the following manifest to a file and applying it with <code class="highlight">kubectl apply -f</code></p>
<p>When using secure registries, the <a href="#registries-file"><code>registries.yaml</code> file</a> must include information about the certificates. For example, if you want to use images from the secure registry running at <code>https://my.company.registry</code>, you must first download a CA file valid for that server and store it in some well-known directory like <code>${HOME}/.k3d/my-company-root.pem</code>.</p>
<p>Then you have to mount the CA file in some directory in the nodes in the cluster and include that mounted file in a <code>configs</code> section in the <a href="#registries-file"><code>registries.yaml</code> file</a>.<br/>
For example, if we mount the CA file in <code>/etc/ssl/certs/my-company-root.pem</code>, the <code>registries.yaml</code> will look like:</p>
</div>
<h4id="create-a-dedicated-registry-together-with-your-cluster">Create a dedicated registry together with your cluster<aclass="headerlink"href="#create-a-dedicated-registry-together-with-your-cluster"title="Permanent link">¶</a></h4>
<ol>
<li><codeclass="highlight">k3d cluster create mycluster --registry-create</code>: This creates your cluster <code>mycluster</code> together with a registry container called <code>k3d-mycluster-registry</code>
- k3d sets everything up in the cluster for containerd to be able to pull images from that registry (using the <code>registries.yaml</code> file)
- the port, which the registry is listening on will be mapped to a random port on your host system</li>
<li>Check the k3d command output or <codeclass="highlight">docker ps -f <spanclass="nv">name</span><spanclass="o">=</span>k3d-mycluster-registry</code> to find the exposed port (let’s use <code>12345</code> here)</li>
<li>
<p><codeclass="highlight">k3d cluster create mycluster --registry-create</code>: This creates your cluster <code>mycluster</code> together with a registry container called <code>k3d-mycluster-registry</code></p>
<ul>
<li>k3d sets everything up in the cluster for containerd to be able to pull images from that registry (using the <code>registries.yaml</code> file)</li>
<li>the port, which the registry is listening on will be mapped to a random port on your host system</li>
</ul>
</li>
<li>
<p>Check the k3d command output or <code class="highlight">docker ps -f name=k3d-mycluster-registry</code> to find the exposed port (let’s use <code>12345</code> here)</p>
</li>
<li>Pull some image (optional) <code class="highlight">docker pull alpine:latest</code>, re-tag it to reference your newly created registry <code class="highlight">docker tag alpine:latest k3d-mycluster-registry:12345/testimage:local</code> and push it <code class="highlight">docker push k3d-mycluster-registry:12345/testimage:local</code></li>
<li>Use kubectl to create a new pod in your cluster using that image to see if the cluster can pull from the new registry: <code class="highlight">kubectl run --image k3d-mycluster-registry:12345/testimage:local testimage --command -- tail -f /dev/null</code> (creates a container that will not do anything but keep on running)</li>
<li>push to your registry from your local development machine.</li>
<li>use images from that registry in <code>Deployments</code> in your k3d cluster.</li>
</ul>
<p>We will verify these two things for a local registry (located at <code>k3d-registry.localhost:12345</code>) running in your development machine.<br/>
Things would be basically the same for checking an external registry, but some additional configuration could be necessary in your local machine when using an authenticated or secure registry (please refer to Docker’s documentation for this).</p>
<p>First, we can download some image (like <code>nginx</code>) and push it to our local registry with:</p>
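<p>A sketch of those commands (the registry address matches the example above; the image and deployment names are illustrative, with the deployment name chosen to match the label checked below):</p>
<div class="highlight"><pre><span></span><code>docker pull nginx:latest
docker tag nginx:latest k3d-registry.localhost:12345/nginx:latest
docker push k3d-registry.localhost:12345/nginx:latest
# then reference the image from inside the cluster
kubectl create deployment nginx-test-registry --image=k3d-registry.localhost:12345/nginx:latest
</code></pre></div>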
<p>Then you should check that the pod is running with <code>kubectl get pods -l "app=nginx-test-registry"</code>.</p>
<h2id="configuring-registries-for-k3s-v091">Configuring registries for k3s <= v0.9.1<aclass="headerlink"href="#configuring-registries-for-k3s-v091"title="Permanent link">¶</a></h2>
<p>k3s servers below v0.9.1 do not recognize the <code>registries.yaml</code> file as described in
the in the beginning, so you will need to embed the contents of that file in a <code>containerd</code> configuration file.
<p>k3s servers below v0.9.1 do not recognize the <code>registries.yaml</code> file as described in the in the beginning, so you will need to embed the contents of that file in a <code>containerd</code> configuration file.<br/>
You will have to create your own <code>containerd</code> configuration file at some well-known path like <code>${HOME}/.k3d/config.toml.tmpl</code>, like this:</p>
<detailsclass="registriesprev091"><summary>config.toml.tmpl</summary><divclass="highlight"><pre><span></span><code><spanclass="c1"># Original section: no changes</span>
<spanclass="k">[plugins.opt]</span>
@ -1305,7 +1302,7 @@ You will have to create your own <code>containerd</code> configuration file at s
<divclass="md-source-date">
<small>
Last update: <spanclass="git-revision-date-localized-plugin git-revision-date-localized-plugin-date">March 11, 2021</span>
Last update: <spanclass="git-revision-date-localized-plugin git-revision-date-localized-plugin-date">June 9, 2021</span>
<p>By default, k3d will update your default kubeconfig with your new cluster’s details and set the current-context to it (can be disabled).
<p>By default, k3d will update your default kubeconfig with your new cluster’s details and set the current-context to it (can be disabled).<br/>
To get a kubeconfig set up for you to connect to a k3d cluster without this automatism, you can go different ways.</p>
<detailsclass="question"><summary>What is the default kubeconfig?</summary><p>We determine the path of the used or default kubeconfig in two ways:</p>
<ol>
</details>
<h2id="getting-the-kubeconfig-for-a-newly-created-cluster">Getting the kubeconfig for a newly created cluster<aclass="headerlink"href="#getting-the-kubeconfig-for-a-newly-created-cluster"title="Permanent link">¶</a></h2>
<ol>
<li>Create a new kubeconfig file <strong>after</strong> cluster creation<ul>
<li><em>Note:</em> this won’t switch the current-context</li>
</ul>
</li>
<li>The file will be created if it doesn’t exist</li>
</ul>
</li>
</ol>
<divclass="admonition info">
<pclass="admonition-title">Switching the current context</p>
<p>None of the above options switch the current-context by default.
This is intended to be least intrusive, since the current-context has a global effect.
<p>None of the above options switch the current-context by default.<br/>
This is intended to be least intrusive, since the current-context has a global effect.<br/>
You can switch the current-context directly with the <code>kubeconfig merge</code> command by adding the <code>--kubeconfig-switch-context</code> flag.</p>
</div>
<h2id="removing-cluster-details-from-the-kubeconfig">Removing cluster details from the kubeconfig<aclass="headerlink"href="#removing-cluster-details-from-the-kubeconfig"title="Permanent link">¶</a></h2>
<p><codeclass="highlight">k3d cluster delete mycluster</code> will always remove the details for <code>mycluster</code> from the default kubeconfig.
It will also delete the respective kubeconfig file in <code>$HOME/.k3d/</code> if it exists.</p>
<p><code>k3d kubeconfig merge</code> lets you specify one or more clusters via arguments <em>or</em> all via <code>--all</code>.<br/>
All kubeconfigs will then be merged into a single file if <code>--kubeconfig-merge-default</code> or <code>--output</code> is specified.<br/>
If none of those two flags was specified, a new file will be created per cluster and the merged path (e.g. <code>$HOME/.k3d/kubeconfig-cluster1.yaml:$HOME/.k3d/cluster2.yaml</code>) will be returned.<br/>
Note that with multiple clusters specified, the <code>--kubeconfig-switch-context</code> flag will change the current context to the cluster which was last in the list.</p>
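<p>A sketch of merging a single cluster’s details into your default kubeconfig and switching the context in one go (the cluster name is illustrative):</p>
<div class="highlight"><pre><span></span><code>k3d kubeconfig merge mycluster --kubeconfig-merge-default --kubeconfig-switch-context
</code></pre></div>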
<p>Create a cluster with 3 server nodes using k3s’ embedded etcd (old: dqlite) database.
The first server to be created will use the <code>--cluster-init</code> flag and k3d will wait for it to be up and running before creating (and connecting) the other server nodes.</p>
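<p>A sketch of that creation command (the cluster name is illustrative):</p>
<div class="highlight"><pre><span></span><code>k3d cluster create multiserver --servers 3
</code></pre></div>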
<h2id="adding-server-nodes-to-a-running-cluster">Adding server nodes to a running cluster<aclass="headerlink"href="#adding-server-nodes-to-a-running-cluster"title="Permanent link">¶</a></h2>
<p>In theory (and also in practice in most cases), this is as easy as executing the following command:</p>
<divclass="highlight"><pre><span></span><code>k3d node create newserver --cluster multiserver --role server
<divclass="highlight"><pre><span></span><code>k3d node create newserver --cluster multiserver --role server
</code></pre></div>
<divclass="admonition important">
<pclass="admonition-title">There’s a trap!</p>
<p>If your cluster was initially created with only a single server node, then this will fail.
That’s because the initial server node was not started with the <code>--cluster-init</code> flag and thus is not using the dqlite backend.</p>
<p>If your cluster was initially created with only a single server node, then this will fail.<br/>
That’s because the initial server node was not started with the <code>--cluster-init</code> flag and thus is not using the etcd (old: dqlite) backend.</p>
</div>