<h1 id="faq-nice-to-know">FAQ / Nice to know<a class="headerlink" href="#faq-nice-to-know" title="Permanent link">¶</a></h1>
<h2 id="issues-with-btrfs">Issues with BTRFS<a class="headerlink" href="#issues-with-btrfs" title="Permanent link">¶</a></h2>
<ul>
<li>As <a href="https://github.com/jaredallard">@jaredallard</a> <a href="https://github.com/rancher/k3d/pull/48">pointed out</a>, people running <code>k3d</code> on a system with <strong>btrfs</strong> may need to mount <code>/dev/mapper</code> into the nodes for the setup to work.<ul>
<li>This will do: <code>k3d cluster create CLUSTER_NAME -v /dev/mapper:/dev/mapper</code></li>
</ul>
</li>
</ul>
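<p>A quick way to check whether this applies to your setup (a sketch; the cluster name <code>mycluster</code> is just an example):</p>
<div class="highlight"><pre><span></span><code># Which storage driver and data directory is Docker using?
docker info --format '{{.Driver}} / {{.DockerRootDir}}'
# Which filesystem backs that directory?
df --output=fstype /var/lib/docker

# If it reports btrfs, mount /dev/mapper into the nodes
k3d cluster create mycluster -v /dev/mapper:/dev/mapper
</code></pre></div>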
<h2 id="issues-with-zfs">Issues with ZFS<a class="headerlink" href="#issues-with-zfs" title="Permanent link">¶</a></h2>
<ul>
<li>
<p>k3s currently has <a href="https://github.com/rancher/k3s/issues/66">no support for ZFS</a>, so creating multi-server setups (e.g. <code>k3d cluster create multiserver --servers 3</code>) fails, because the initializing server node (server flag <code>--cluster-init</code>) errors out with the following log:</p>
<div class="highlight"><pre><span></span><code>starting kubernetes: preparing server: start cluster and https: raft_init<span class="o">()</span>: io: create I/O capabilities probe file: posix_allocate: operation not supported on socket
</code></pre></div>
<ul>
<li>This issue can be worked around by providing Docker with a different backing filesystem (which is also better for docker-in-docker workloads).</li>
<li>A possible solution can be found here: <a href="https://github.com/rancher/k3s/issues/1688#issuecomment-619570374">https://github.com/rancher/k3s/issues/1688#issuecomment-619570374</a></li>
</ul>
</li>
</ul>
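<p>The workaround linked above boils down to giving Docker a non-ZFS filesystem to run on. A minimal sketch (the pool name <code>rpool</code> and the volume size are placeholders; stop Docker and back up <code>/var/lib/docker</code> before trying this):</p>
<div class="highlight"><pre><span></span><code># Create a zvol and format it with ext4
sudo zfs create -V 50G rpool/docker
sudo mkfs.ext4 /dev/zvol/rpool/docker

# Mount it over Docker's data directory and restart Docker
sudo systemctl stop docker
sudo mount /dev/zvol/rpool/docker /var/lib/docker
sudo systemctl start docker
</code></pre></div>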
<h2 id="pods-evicted-due-to-lack-of-disk-space">Pods evicted due to lack of disk space<a class="headerlink" href="#pods-evicted-due-to-lack-of-disk-space" title="Permanent link">¶</a></h2>
<ul>
<li>Pods go to evicted state after doing X<ul>
<li>Related issues: <a href="https://github.com/rancher/k3d/issues/133">#133 - Pods evicted due to <code>NodeHasDiskPressure</code></a> (collection of #119 and #130)</li>
<li>Background: Docker runs out of space for the k3d node containers, which triggers a hard eviction in the kubelet</li>
<li>Possible <a href="https://github.com/rancher/k3d/issues/133#issuecomment-549065666">fix/workaround by @zer0def</a>:<ul>
</ul>
</li>
</ul>
</li>
</ul>
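<p>The workaround referenced above tunes the kubelet’s eviction thresholds. A hedged sketch using k3d v4 flag names (the threshold values are illustrative, not recommendations):</p>
<div class="highlight"><pre><span></span><code>k3d cluster create mycluster \
  --k3s-agent-arg '--kubelet-arg=eviction-hard=imagefs.available&lt;1%,nodefs.available&lt;1%' \
  --k3s-agent-arg '--kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1%'
</code></pre></div>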
<h2 id="restarting-a-multi-server-cluster-or-the-initializing-server-node-fails">Restarting a multi-server cluster or the initializing server node fails<a class="headerlink" href="#restarting-a-multi-server-cluster-or-the-initializing-server-node-fails" title="Permanent link">¶</a></h2>
<ul>
<li>What you do: You create a cluster with more than one server node and later, you either stop <code>server-0</code> or stop/start the whole cluster</li>
<ul>
<li>The Problem: Passing a feature flag to the Kubernetes API Server running inside k3s.</li>
<li>Example: you want to enable the EphemeralContainers feature flag in Kubernetes</li>
<li>Note: There are many ways to use <code>"</code> and <code>'</code> quotes; just be aware that shells may try to interpret/interpolate parts of the command</li>
</ul>
</li>
</ul>
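<p>For example, enabling the EphemeralContainers feature gate could look like this (a sketch using k3d v4 flag names; note the quoting):</p>
<div class="highlight"><pre><span></span><code>k3d cluster create mycluster --k3s-server-arg "--kube-apiserver-arg=feature-gates=EphemeralContainers=true"
</code></pre></div>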
<h2 id="how-to-access-services-like-a-database-running-on-my-docker-host-machine">How to access services (like a database) running on my Docker Host Machine<a class="headerlink" href="#how-to-access-services-like-a-database-running-on-my-docker-host-machine" title="Permanent link">¶</a></h2>
<ul>
<li>As of version v3.1.0, we’re injecting the <code>host.k3d.internal</code> entry into the k3d containers (k3s nodes) and into the CoreDNS ConfigMap, enabling you to access your host system by referring to it as <code>host.k3d.internal</code></li>
</ul>
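<p>For example, a pod could reach a PostgreSQL server listening on the Docker host like this (a sketch; the port, user, and database name are placeholders):</p>
<div class="highlight"><pre><span></span><code># From inside any pod in the k3d cluster
psql -h host.k3d.internal -p 5432 -U myuser mydb

# Or just verify that the name resolves
kubectl run -it --rm dnstest --image=busybox --restart=Never -- nslookup host.k3d.internal
</code></pre></div>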
<h2 id="nodes-fail-to-start-or-get-stuck-in-notready-state-with-log-nf_conntrack_max-permission-denied">Nodes fail to start or get stuck in <code>NotReady</code> state with log <code>nf_conntrack_max: permission denied</code><a class="headerlink" href="#nodes-fail-to-start-or-get-stuck-in-notready-state-with-log-nf_conntrack_max-permission-denied" title="Permanent link">¶</a></h2>
<ul>
<li>When: This happens when running k3d on a Linux system with a kernel version >= 5.12.2 (and others, like >= 5.11.19) when creating a new cluster<ul>
<li>the node(s) stop or get stuck with a log line like this: <code>&lt;TIMESTAMP&gt; F0516 05:05:31.782902 7 server.go:495] open /proc/sys/net/netfilter/nf_conntrack_max: permission denied</code></li>
</ul>
</li>
</ul>
</li>
<li>Why: The issue was introduced by a change in the Linux kernel (<a href="https://cdn.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.12.2">Changelog 5.12.2</a>: <a href="https://github.com/torvalds/linux/commit/671c54ea8c7ff47bd88444f3fffb65bf9799ce43">Commit</a>) that changed the netfilter conntrack behavior such that <code>kube-proxy</code> is no longer able to set the <code>nf_conntrack_max</code> value</li>
<li>Workaround: we can tell <code>kube-proxy</code> not to even try to set this value: <code>k3d cluster create --k3s-server-arg "--kube-proxy-arg=conntrack-max-per-core=0" --k3s-agent-arg "--kube-proxy-arg=conntrack-max-per-core=0" --image rancher/k3s:v1.20.6-k3s1</code></li>
<li>Fix: This is going to be fixed “upstream” in k3s itself in <a href="https://github.com/k3s-io/k3s/pull/3337">rancher/k3s#3337</a> and backported to k3s versions as low as v1.18.</li>
<details class="tip"><summary>View a quick demo</summary><p><asciinema-player src="/static/asciicast/20200715_k3d.01.cast" cols="200" rows="32"></asciinema-player></p>
<ul>
<li><strong>hot-reloading</strong> of code when developing on k3d (Python Flask App)</li>
<li>build-deploy-test cycle using <strong>Tilt</strong></li>
<li>full cluster lifecycle for simple and <strong>multi-server</strong> clusters</li>
<li>Proof of Concept of using k3d as a service in <strong>Drone CI</strong></li>
</ul>
</div>
<ul>
<li><a href="https://www.youtube.com/watch?v=hMr3prm9gDM">Rancher Meetup - May 2020 - Simplifying Your Cloud-Native Development Workflow With K3s, K3c and K3d (YouTube)</a></li>
<h4 id="install-current-latest-release">Install current latest release<a class="headerlink" href="#install-current-latest-release" title="Permanent link">¶</a></h4>
<ul>
<li>use the install script to grab the latest release:</li>
<li>Formula can be found in <a href="https://github.com/Homebrew/homebrew-core/blob/master/Formula/k3d.rb">homebrew/homebrew-core</a> and is mirrored to <a href="https://github.com/Homebrew/linuxbrew-core/blob/master/Formula/k3d.rb">homebrew/linuxbrew-core</a></li>
<li>install via <a href="https://aur.archlinux.org/">AUR</a> package <a href="https://aur.archlinux.org/packages/rancher-k3d-bin/">rancher-k3d-bin</a>: <code>yay -S rancher-k3d-bin</code></li>
<li>grab a release from the <a href="https://github.com/rancher/k3d/releases">release tab</a> and install it yourself.</li>
</ul>
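<p>At the time of writing, the install script can be fetched straight from the repository (always review a script before piping it into your shell):</p>
<div class="highlight"><pre><span></span><code>wget -q -O - https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash
# or
curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash
</code></pre></div>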
<ul>
<li>Others<ul>
<li>install via go: <code class="highlight">go install github.com/rancher/k3d</code> (<strong>Note</strong>: this will give you unreleased/bleeding-edge changes)</li>
<li>use <a href="https://github.com/alexellis/arkade">arkade</a>: <code>arkade get k3d</code></li>
<li>use <a href="https://asdf-vm.com">asdf</a>: <code>asdf plugin-add k3d</code>, then <code>asdf install k3d &lt;tag&gt;</code> with <code>&lt;tag&gt; = latest</code> or <code>3.x.x</code> for a specific version (maintained by <a href="https://github.com/spencergilbert/asdf-k3d">spencergilbert/asdf-k3d</a>)</li>
<li>use <a href="https://chocolatey.org/">Chocolatey</a>: <code>choco install k3d</code> (the Chocolatey package manager is available for Windows)</li>
<li>package source can be found in <a href="https://github.com/erwinkersten/chocolatey-packages/tree/master/automatic/k3d">erwinkersten/chocolatey-packages</a></li>
<p>Use the new cluster with <a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/"><code>kubectl</code></a>, e.g.:</p>
<div class="highlight"><pre><span></span><code>kubectl get nodes
</code></pre></div>
<details class="note"><summary>Getting the cluster’s kubeconfig (included in <code>k3d cluster create</code>)</summary><p>Get the new cluster’s connection details merged into your default kubeconfig (usually specified via the <code>KUBECONFIG</code> environment variable or the default path <code class="highlight"><span class="nv">$HOME</span>/.kube/config</code>) and directly switch to the new context:</p>
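<p>A sketch of that merge-and-switch step for a cluster named <code>mycluster</code> (flag names as of k3d v4):</p>
<div class="highlight"><pre><span></span><code>k3d kubeconfig merge mycluster --kubeconfig-merge-default --kubeconfig-switch-context
</code></pre></div>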
<li><a href="https://github.com/inercia/vscode-k3d/">vscode-k3d</a>: VSCode extension to handle k3d clusters from within VSCode</li>
<li><a href="https://github.com/inercia/k3x">k3x</a>: a graphical interface (for Linux) to k3d.</li>
<li><a href="https://github.com/AbsaOSS/k3d-action">AbsaOSS/k3d-action</a>: fully customizable GitHub Action to run lightweight Kubernetes clusters.</li>
</ul>
<div class="md-source-date">
<small>
Last update: <span class="git-revision-date-localized-plugin git-revision-date-localized-plugin-date">May 21, 2021</span>
<li>by default, when <code>--servers</code> &gt; 1 and no <code>--datastore-x</code> option is set, the first server node (server-0) will be the initializing server node<ul>
<li>the initializing server node will have the <code>--cluster-init</code> flag appended</li>
<li>all other server nodes will refer to the initializing server node via <code>--server https://&lt;init-node&gt;:6443</code></li>
</ul>
</li>
<li>API-Ports<ul>
<li>by default, we expose the API-Port (<code>6443</code>) by forwarding traffic from the default server loadbalancer (nginx container) to the server node(s)</li>
<li>port <code>6443</code> of the loadbalancer is then mapped to a specific (<code>--api-port</code> flag) or a random (default) port on the host system</li>
</ul>
</li>
<li>kubeconfig<ul>
<li>if <code>--kubeconfig-update-default</code> is set, we use the default loading rules to get the default kubeconfig:<ul>
<li>First: kubeconfig specified via the <code>KUBECONFIG</code> environment variable (error out if multiple are specified)</li>
<li>Second: default kubeconfig in the home directory (e.g. <code>$HOME/.kube/config</code>)</li>
</ul>
</li>
</ul>
</li>
<li>Networking<ul>
<li><a href="./networking">by default, k3d creates a new (docker) network for every cluster</a></li>
</ul>
</li>
</ul>
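<p>These defaults are easy to inspect with plain Docker commands (assuming a cluster named <code>mycluster</code>):</p>
<div class="highlight"><pre><span></span><code># The per-cluster docker network
docker network ls --filter name=k3d-mycluster

# The loadbalancer container and the host port mapped to 6443
docker ps --filter name=k3d-mycluster-serverlb --format '{{.Names}}: {{.Ports}}'
</code></pre></div>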
<div class="md-source-date">
<small>
Last update: <span class="git-revision-date-localized-plugin git-revision-date-localized-plugin-date">May 21, 2021</span>
<li><code>--api-port 6550</code> is not required for the example to work. It’s used to have <code>k3s</code>’s API-Server listening on port 6550, with that port mapped to the host system.</li>
<li>map port <code>8081</code> from the host to port <code>80</code> on the container which matches the nodefilter <code>loadbalancer</code></li>
</ul>
</li>
<li>the <code>loadbalancer</code> nodefilter matches only the <code>serverlb</code> that’s deployed in front of a cluster’s server nodes<ul>
<li>all ports exposed on the <code>serverlb</code> will be proxied to the same ports on all server nodes in the cluster</li>
</ul>
</li>
</ul>
</div>
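<p>For reference, the cluster-creation command these notes describe looks roughly like this (the cluster name is arbitrary):</p>
<div class="highlight"><pre><span></span><code>k3d cluster create mycluster --api-port 6550 -p "8081:80@loadbalancer" --agents 2
</code></pre></div>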
</li>
<li>
<p>Get the kubeconfig file (redundant, as <code>k3d cluster create</code> already merges it into your default kubeconfig file)</p>
</ol>
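<p>If you prefer a separate kubeconfig file over merging into the default one, a sketch:</p>
<div class="highlight"><pre><span></span><code>k3d kubeconfig get mycluster > mycluster.kubeconfig
export KUBECONFIG="$(pwd)/mycluster.kubeconfig"
kubectl get nodes
</code></pre></div>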
<h2 id="2-via-nodeport">2. via NodePort<a class="headerlink" href="#2-via-nodeport" title="Permanent link">¶</a></h2>
<ol>
<li>
<p>Create a cluster, mapping port 30080 from agent-0 to localhost:8082</p>
<p><code class="highlight">k3d cluster create mycluster -p <span class="s2">"8082:30080@agent[0]"</span> --agents <span class="m">2</span></code></p>
<p><strong>Note</strong>: Kubernetes’ default NodePort range is <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport"><code>30000-32767</code></a></p>
<p><strong>Note</strong>: You may as well expose the whole NodePort range from the very beginning, e.g. via <code>k3d cluster create mycluster --agents 3 -p "30000-32767:30000-32767@server[0]"</code> (see <a href="https://www.youtube.com/watch?v=5HaU6338lAk">this video from @portainer</a>)</p>
<p><strong>Warning</strong>: Docker creates iptables entries and a new proxy process per port-mapping, so this may take a very long time or even freeze your system!</p>
</li>
</ol>
<p>… (Steps 2 and 3 like above) …</p>
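<p>Once the cluster from step 1 is up, the mapping can be exercised like this (a sketch; deployment and service names are arbitrary):</p>
<div class="highlight"><pre><span></span><code># Run nginx and expose it on the NodePort that was mapped to the host
kubectl create deployment nginx --image=nginx
kubectl create service nodeport nginx --tcp=80:80 --node-port=30080

# Reach it through the host-side port from step 1
curl localhost:8082
</code></pre></div>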
</div>
<h4 id="create-a-dedicated-registry-together-with-your-cluster">Create a dedicated registry together with your cluster<a class="headerlink" href="#create-a-dedicated-registry-together-with-your-cluster" title="Permanent link">¶</a></h4>
<ol>
<li><code class="highlight">k3d cluster create mycluster --registry-create</code>: This creates your cluster <code>mycluster</code> together with a registry container called <code>k3d-mycluster-registry</code><ul>
<li>k3d sets everything up in the cluster for containerd to be able to pull images from that registry (using the <code>registries.yaml</code> file)</li>
<li>the port which the registry is listening on will be mapped to a random port on your host system</li>
</ul>
</li>
<li>Check the k3d command output or <code class="highlight">docker ps -f <span class="nv">name</span><span class="o">=</span>k3d-mycluster-registry</code> to find the exposed port (let’s use <code>12345</code> here)</li>
<li>Pull some image (optional) <code class="highlight">docker pull alpine:latest</code>, re-tag it to reference your newly created registry <code class="highlight">docker tag alpine:latest k3d-mycluster-registry:12345/testimage:local</code> and push it <code class="highlight">docker push k3d-mycluster-registry:12345/testimage:local</code></li>
<li>Use kubectl to create a new pod in your cluster using that image to see if the cluster can pull from the new registry: <code class="highlight">kubectl run --image k3d-mycluster-registry:12345/testimage:local testimage --command -- tail -f /dev/null</code> (creates a container that will not do anything but keep running)</li>
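<li>Optional sanity check: for the <code>docker push</code> above to work, the registry name may need to be resolvable on your host, and the registry can be queried over the standard registry HTTP API (a sketch; <code>12345</code> is the example port from above):
<div class="highlight"><pre><span></span><code># Make the registry name resolvable on the host
echo '127.0.0.1 k3d-mycluster-registry' | sudo tee -a /etc/hosts

# List the repositories the registry knows about
curl http://k3d-mycluster-registry:12345/v2/_catalog
</code></pre></div>
</li>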
</details>
<h2 id="getting-the-kubeconfig-for-a-newly-created-cluster">Getting the kubeconfig for a newly created cluster<a class="headerlink" href="#getting-the-kubeconfig-for-a-newly-created-cluster" title="Permanent link">¶</a></h2>
<ol>
<li>Create a new kubeconfig file <strong>after</strong> cluster creation<ul>