<p><strong>This page is targeting k3d v4.0.0 and newer!</strong></p>
<p>k3d is a lightweight wrapper to run <a href="https://github.com/rancher/k3s">k3s</a> (Rancher Lab’s minimal Kubernetes distribution) in docker.</p>
<p>k3d makes it very easy to create single- and multi-node <a href="https://github.com/rancher/k3s">k3s</a> clusters in docker, e.g. for local development on Kubernetes.</p>
<details class="tip"><summary>View a quick demo</summary><p><asciinema-player src="/static/asciicast/20200715_k3d.01.cast" cols=200 rows=32></asciinema-player></p>
<p>use <a href="https://brew.sh">Homebrew</a>: <code class="highlight">brew install k3d</code> (Homebrew is available for macOS and Linux)</p>
<ul>
<li>Formula can be found in <a href="https://github.com/Homebrew/homebrew-core/blob/master/Formula/k3d.rb">homebrew/homebrew-core</a> and is mirrored to <a href="https://github.com/Homebrew/linuxbrew-core/blob/master/Formula/k3d.rb">homebrew/linuxbrew-core</a></li>
</ul>
</li>
<li>install via <a href="https://aur.archlinux.org/">AUR</a> package <a href="https://aur.archlinux.org/packages/rancher-k3d-bin/">rancher-k3d-bin</a>: <code>yay -S rancher-k3d-bin</code></li>
<li>grab a release from the <a href="https://github.com/rancher/k3d/releases">release tab</a> and install it yourself.</li>
<li>install via go: <code class="highlight">go install github.com/rancher/k3d</code> (<strong>Note</strong>: this will give you unreleased/bleeding-edge changes)</li>
<p>Get the new cluster’s connection details merged into your default kubeconfig (usually specified using the <code>KUBECONFIG</code> environment variable or the default path <code class="highlight"><span class="nv">$HOME</span>/.kube/config</code>) and directly switch to the new context:</p>
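<p>For example (the cluster name <code>mycluster</code> is just a placeholder), with k3d v4’s defaults this happens automatically on creation:</p>

```shell
# kubeconfig update and context switch are enabled by default in k3d v4
k3d cluster create mycluster
# verify: the current context should now point at the new cluster ('k3d-mycluster')
kubectl config current-context
```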
<li>by default, when <code>--servers</code> &gt; 1 and no <code>--datastore-x</code> option is set, the first server node (server-0) will be the initializing server node<ul>
<li>the initializing server node will have the <code>--cluster-init</code> flag appended</li>
<li>all other server nodes will refer to the initializing server node via <code>--server https://&lt;init-node&gt;:6443</code></li>
</ul>
</li>
</ul>
</li>
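<p>The multi-server behavior described above can be triggered with a single flag (cluster name is illustrative):</p>

```shell
# server-0 becomes the initializing node (--cluster-init is appended),
# server-1 and server-2 join it via '--server https://<init-node>:6443'
k3d cluster create multiserver --servers 3
```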
<li>API-Ports<ul>
<li>by default, we expose the API-Port (<code>6443</code>) by forwarding traffic from the default server loadbalancer (nginx container) to the server node(s)</li>
<li>port <code>6443</code> of the loadbalancer is then mapped to a specific (<code>--api-port</code> flag) or a random (default) port on the host system</li>
</ul>
</li>
<li>kubeconfig<ul>
<li>if <code>--kubeconfig-update-default</code> is set, we use the default loading rules to get the default kubeconfig:<ul>
<li>First: kubeconfig specified via the <code>KUBECONFIG</code> environment variable (error out if multiple are specified)</li>
<li>Second: default kubeconfig in home directory (e.g. <code>$HOME/.kube/config</code>)</li>
</ul>
</li>
</ul>
</li>
<li>Networking<ul>
<li><a href="./networking">by default, k3d creates a new (docker) network for every cluster</a></li>
</ul>
</li>
</ul>
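<p>A short sketch of these API-Port defaults in action (cluster name and host port are illustrative):</p>

```shell
# pin the API port on the host instead of letting k3d pick a random one;
# the server loadbalancer then forwards 127.0.0.1:6445 -> 6443 on the server node(s)
k3d cluster create mycluster --api-port 127.0.0.1:6445
```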
<div class="md-source-date">
<small>
Last update: <span class="git-revision-date-localized-plugin git-revision-date-localized-plugin-date">January 5, 2021</span>
--version <span class="c1"># show k3d and k3s version</span>
-h, --help <span class="c1"># GLOBAL: show help text</span>
cluster <span class="o">[</span>CLUSTERNAME<span class="o">]</span> <span class="c1"># default cluster name is 'k3s-default'</span>
  create
    --api-port <span class="c1"># specify the port on which the cluster will be accessible (format '[HOST:]HOSTPORT', default: random)</span>
    -a, --agents <span class="c1"># specify how many agent nodes you want to create (integer, default: 0)</span>
    -c, --config <span class="c1"># use a config file (format 'PATH')</span>
    -e, --env <span class="c1"># add environment variables to the nodes (quoted string, format: 'KEY[=VALUE][@NODEFILTER[;NODEFILTER...]]', use flag multiple times)</span>
    --gpus <span class="c1"># [from docker CLI] add GPU devices to the node containers (string, e.g. 'all')</span>
    -i, --image <span class="c1"># specify which k3s image should be used for the nodes (string, default: 'docker.io/rancher/k3s:v1.20.0-k3s2', tag changes per build)</span>
    --k3s-agent-arg <span class="c1"># add additional arguments to the k3s agent (quoted string, use flag multiple times) (see https://rancher.com/docs/k3s/latest/en/installation/install-options/agent-config/#k3s-agent-cli-help)</span>
    --k3s-server-arg <span class="c1"># add additional arguments to the k3s server (quoted string, use flag multiple times) (see https://rancher.com/docs/k3s/latest/en/installation/install-options/server-config/#k3s-server-cli-help)</span>
    --kubeconfig-switch-context <span class="c1"># (implies --kubeconfig-update-default) automatically sets the current-context of your default kubeconfig to the new cluster's context (default: true)</span>
    --kubeconfig-update-default <span class="c1"># enable the automated update of the default kubeconfig with the details of the newly created cluster (also sets '--wait=true') (default: true)</span>
    -l, --label <span class="c1"># add (docker) labels to the node containers (format: 'KEY[=VALUE][@NODEFILTER[;NODEFILTER...]]', use flag multiple times)</span>
    --network <span class="c1"># specify an existing (docker) network you want to connect to (string)</span>
    --no-hostip <span class="c1"># disable the automatic injection of the Host IP as 'host.k3d.internal' into the containers and CoreDNS (default: false)</span>
    --no-image-volume <span class="c1"># disable the creation of a volume for storing images (used for the 'k3d image import' command) (default: false)</span>
    --no-lb <span class="c1"># disable the creation of a load balancer in front of the server nodes (default: false)</span>
    --no-rollback <span class="c1"># disable the automatic rollback actions, if anything goes wrong (default: false)</span>
    -p, --port <span class="c1"># add some more port mappings (format: '[HOST:][HOSTPORT:]CONTAINERPORT[/PROTOCOL][@NODEFILTER]', use flag multiple times)</span>
    --registry-create <span class="c1"># create a new (docker) registry dedicated for this cluster (default: false)</span>
    --registry-use <span class="c1"># use an existing local (docker) registry with this cluster (string, use multiple times)</span>
    -s, --servers <span class="c1"># specify how many server nodes you want to create (integer, default: 1)</span>
    --token <span class="c1"># specify a cluster token (string, default: auto-generated)</span>
    --timeout <span class="c1"># specify a timeout, after which the cluster creation will be interrupted and changes rolled back (duration, e.g. '10s')</span>
    -v, --volume <span class="c1"># specify additional bind-mounts (format: '[SOURCE:]DEST[@NODEFILTER[;NODEFILTER...]]', use flag multiple times)</span>
    --wait <span class="c1"># enable waiting for all server nodes to be ready before returning (default: true)</span>
  start CLUSTERNAME <span class="c1"># start a (stopped) cluster</span>
    -a, --all <span class="c1"># start all clusters (default: false)</span>
    --wait <span class="c1"># wait for all servers and server-loadbalancer to be up before returning (default: true)</span>
    --timeout <span class="c1"># maximum waiting time for '--wait' before canceling/returning (duration, e.g. '10s')</span>
  stop CLUSTERNAME <span class="c1"># stop a cluster</span>
    -a, --all <span class="c1"># stop all clusters (default: false)</span>
  delete CLUSTERNAME <span class="c1"># delete an existing cluster</span>
    -a, --all <span class="c1"># delete all existing clusters (default: false)</span>
  list <span class="o">[</span>CLUSTERNAME <span class="o">[</span>CLUSTERNAME ...<span class="o">]]</span>
    --no-headers <span class="c1"># do not print headers (default: false)</span>
    --token <span class="c1"># show column with cluster tokens (default: false)</span>
    -o, --output <span class="c1"># format the output (format: 'json|yaml')</span>
completion <span class="o">[</span>bash <span class="p">|</span> zsh <span class="p">|</span> fish <span class="p">|</span> <span class="o">(</span>psh <span class="p">|</span> powershell<span class="o">)]</span> <span class="c1"># generate completion scripts for common shells</span>
config
  init <span class="c1"># write a default k3d config (as a starting point)</span>
    -f, --force <span class="c1"># force overwrite target file (default: false)</span>
    -o, --output <span class="c1"># file to write to (string, default "k3d-default.yaml")</span>
<span class="nb">help</span> <span class="o">[</span>COMMAND<span class="o">]</span> <span class="c1"># show help text for any command</span>
image
  import <span class="o">[</span>IMAGE <span class="p">|</span> ARCHIVE <span class="o">[</span>IMAGE <span class="p">|</span> ARCHIVE ...<span class="o">]]</span> <span class="c1"># Load one or more images from the local runtime environment or tar-archives into k3d clusters</span>
    -c, --cluster <span class="c1"># clusters to load the image into (string, use flag multiple times, default: k3s-default)</span>
    -k, --keep-tarball <span class="c1"># do not delete the image tarball from the shared volume after completion (default: false)</span>
kubeconfig
  get <span class="o">(</span>CLUSTERNAME <span class="o">[</span>CLUSTERNAME ...<span class="o">]</span> <span class="p">|</span> --all<span class="o">)</span> <span class="c1"># get kubeconfig from cluster(s) and write it to stdout</span>
    -a, --all <span class="c1"># get kubeconfigs from all clusters (default: false)</span>
  merge <span class="p">|</span> write <span class="o">(</span>CLUSTERNAME <span class="o">[</span>CLUSTERNAME ...<span class="o">]</span> <span class="p">|</span> --all<span class="o">)</span> <span class="c1"># get kubeconfig from cluster(s) and merge it/them into a (kubeconfig-)file</span>
    -a, --all <span class="c1"># get kubeconfigs from all clusters (default: false)</span>
    -s, --kubeconfig-switch-context <span class="c1"># switch current-context in kubeconfig to the new context (default: true)</span>
    -d, --kubeconfig-merge-default <span class="c1"># update the default kubeconfig (usually $KUBECONFIG or $HOME/.kube/config)</span>
    -o, --output <span class="c1"># specify the output file where the kubeconfig should be written to (string)</span>
node
  create NODENAME <span class="c1"># Create new nodes (and add them to existing clusters)</span>
    -c, --cluster <span class="c1"># specify the cluster that the node shall connect to (string, default: k3s-default)</span>
    -i, --image <span class="c1"># specify which k3s image should be used for the node(s) (string, default: 'docker.io/rancher/k3s:v1.20.0-k3s2', tag changes per build)</span>
    --replicas <span class="c1"># specify how many replicas you want to create with this spec (integer, default: 1)</span>
    --role <span class="c1"># specify the node role (string, format: 'agent|server', default: agent)</span>
    --timeout <span class="c1"># specify a timeout duration, after which the node creation will be interrupted, if not done yet (duration, e.g. '10s')</span>
    --wait <span class="c1"># wait for the node to be up and running before returning (default: true)</span>
  start NODENAME <span class="c1"># start a (stopped) node</span>
  stop NODENAME <span class="c1"># stop a node</span>
  delete NODENAME <span class="c1"># delete an existing node</span>
    -a, --all <span class="c1"># delete all existing nodes (default: false)</span>
  list NODENAME
    --no-headers <span class="c1"># do not print headers (default: false)</span>
registry
  create REGISTRYNAME
    -i, --image <span class="c1"># specify image used for the registry (string, default: "docker.io/library/registry:2")</span>
    -p, --port <span class="c1"># select host port to map to (format: '[HOST:]HOSTPORT', default: 'random')</span>
  delete REGISTRYNAME
    -a, --all <span class="c1"># delete all existing registries (default: false)</span>
version <span class="c1"># show k3d and k3s version</span>
</code></pre></div>
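<p>A typical session combining the commands above might look like this (cluster name <code>demo</code> and the port mapping are illustrative):</p>

```shell
k3d cluster create demo --agents 2 --api-port 6550 -p "8080:80@loadbalancer"
k3d kubeconfig merge demo --kubeconfig-merge-default --kubeconfig-switch-context
kubectl get nodes
k3d cluster delete demo
```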
<div class="md-source-date">
<small>
Last update: <span class="git-revision-date-localized-plugin git-revision-date-localized-plugin-date">January 5, 2021</span>
<h2 id="building-a-customized-k3s-image">Building a customized K3S image<a class="headerlink" href="#building-a-customized-k3s-image" title="Permanent link">¶</a></h2>
<p>To get the NVIDIA container runtime in the K3S image you need to build your own K3S image. The native K3S image is based on Alpine but the NVIDIA container runtime is not supported on Alpine yet. To get around this we need to build the image with a supported base image.</p>
<h3 id="adapt-the-dockerfile">Adapt the Dockerfile<a class="headerlink" href="#adapt-the-dockerfile" title="Permanent link">¶</a></h3>
<p>This <a href="cuda/Dockerfile">Dockerfile</a> is based on the <a href="https://github.com/rancher/k3s/blob/master/package/Dockerfile">K3S Dockerfile</a>.
The following changes are applied:</p>
<ol>
<li>Change the base images to Ubuntu 18.04 so the NVIDIA Container Runtime can be installed</li>
<li>Add a custom containerd <code>config.toml</code> template to add the NVIDIA Container Runtime. This replaces the default <code>runc</code> runtime</li>
<li>Add a manifest for the NVIDIA driver plugin for Kubernetes</li>
</ol>
<p>We need to configure containerd to use the NVIDIA Container Runtime by customizing the <code>config.toml</code> that is used at startup. K3S provides a way to do this using a <a href="config.toml.tmpl">config.toml.tmpl</a> file. More information can be found on the <a href="https://rancher.com/docs/k3s/latest/en/advanced/#configuring-containerd">K3S site</a>.</p>
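<p>A minimal sketch of what such a template changes (illustrative fragment only; the full template ships alongside the linked Dockerfile and assumes the <code>nvidia-container-runtime</code> binary is installed in the image):</p>

```toml
# config.toml.tmpl (fragment): make containerd invoke the NVIDIA runtime
# instead of plain runc as the default low-level runtime
[plugins.cri.containerd.default_runtime]
  runtime_type = "io.containerd.runtime.v1.linux"
  runtime_engine = "nvidia-container-runtime"
```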
<p>To enable NVIDIA GPU support on Kubernetes you also need to install the <a href="https://github.com/NVIDIA/k8s-device-plugin">NVIDIA device plugin</a>. The device plugin is a daemonset and allows you to automatically:</p>
<ul>
<li>Expose the number of GPUs on each node of your cluster</li>
<li>Keep track of the health of your GPUs</li>
<li>Run GPU-enabled containers in your Kubernetes cluster.</li>
</ul>
<p>The <code>build.sh</code> file takes the K3S git tag as an argument; it defaults to <code>v1.18.10+k3s1</code>. The script performs the following steps:</p>
<ul>
<li>pulls K3S</li>
<li>builds K3S</li>
<li>builds the custom K3S Docker image</li>
</ul>
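<p>Usage is then simply (the tag shown is the script’s default; any K3S git tag should work):</p>

```shell
./build.sh v1.18.10+k3s1
# resulting image tag: k3s-gpu:v1.18.10-k3s1 ('+' replaced by '-')
```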
<p>The resulting image is tagged as k3s-gpu:&lt;version tag&gt;. The version tag is the git tag but the ‘+’ sign is replaced with a ‘-’.</p>
<h2 id="run-and-test-the-custom-image-with-docker">Run and test the custom image with Docker<a class="headerlink" href="#run-and-test-the-custom-image-with-docker" title="Permanent link">¶</a></h2>
<p>You can run a container based on the new image with Docker:</p>
<div class="highlight"><pre><span></span><code>docker run --name k3s-gpu -d --privileged --gpus all k3s-gpu:v1.18.10-k3s1
</code></pre></div>
<p>Deploy a <a href="cuda-vector-add.yaml">test pod</a>:</p>
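<p>Assuming your kubeconfig points at this node and the pod in the manifest is named <code>cuda-vector-add</code> (the standard CUDA vector-add sample logs <code>Test PASSED</code> on success):</p>

```shell
kubectl apply -f cuda-vector-add.yaml
kubectl logs pod/cuda-vector-add
```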
<h2 id="run-and-test-the-custom-image-with-k3d">Run and test the custom image with k3d<a class="headerlink" href="#run-and-test-the-custom-image-with-k3d" title="Permanent link">¶</a></h2>
<p>You can use the image with k3d:</p>
<div class="highlight"><pre><span></span><code>k3d cluster create --no-lb --image k3s-gpu:v1.18.10-k3s1 --gpus all
</code></pre></div>
<p>Deploy a <a href="cuda-vector-add.yaml">test pod</a>:</p>
<li>This approach does not work on WSL2 yet. The NVIDIA driver plugin and container runtime rely on the NVIDIA Management Library (NVML) which is not yet supported. See the <a href="https://docs.nvidia.com/cuda/wsl-user-guide/index.html#known-limitations">CUDA on WSL User Guide</a>.</li>
<h2 id="1-via-ingress-recommended">1. via Ingress (recommended)<a class="headerlink" href="#1-via-ingress-recommended" title="Permanent link">¶</a></h2>
<p>In this example, we will deploy a simple nginx webserver deployment and make it accessible via ingress.
Therefore, we have to create the cluster in a way that the internal port 80 (on which the <code>traefik</code> ingress controller is listening) is exposed on the host system.</p>
<ol>
</div>
</li>
<li>
<p>Get the kubeconfig file (redundant, as <code>k3d cluster create</code> already merges it into your default kubeconfig file)</p>
<li>
<p><strong>Note</strong>: You may as well expose the whole NodePort range from the very beginning, e.g. via <code>k3d cluster create mycluster --agents 3 -p "30000-32767:30000-32767@server[0]"</code> (See <a href="https://www.youtube.com/watch?v=5HaU6338lAk">this video from @portainer</a>)</p>
</li>
<li><strong>Warning</strong>: Docker creates iptable entries and a new proxy process per port-mapping, so this may take a very long time or even freeze your system!</li>
</ul>
</li>
</ol>
<div class="md-source-date">
<small>
Last update: <span class="git-revision-date-localized-plugin git-revision-date-localized-plugin-date">January 5, 2021</span>
<p>The k3d-managed registry is available again as of k3d v4.0.0 (January 2021)</p>
</div>
<h3 id="using-your-own-local-registry">Using your own local registry<a class="headerlink" href="#using-your-own-local-registry" title="Permanent link">¶</a></h3>
<h4 id="create-a-dedicated-registry-together-with-your-cluster">Create a dedicated registry together with your cluster<a class="headerlink" href="#create-a-dedicated-registry-together-with-your-cluster" title="Permanent link">¶</a></h4>
<ol>
<li><code class="highlight">k3d cluster create mycluster --registry-create</code>: This creates your cluster <code>mycluster</code> together with a registry container called <code>k3d-mycluster-registry</code><ul>
<li>k3d sets everything up in the cluster for containerd to be able to pull images from that registry (using the <code>registries.yaml</code> file)</li>
<li>the port, which the registry is listening on will be mapped to a random port on your host system</li>
</ul>
</li>
<li>Check the k3d command output or <code class="highlight">docker ps -f <span class="nv">name</span><span class="o">=</span>k3d-mycluster-registry</code> to find the exposed port (let’s use <code>12345</code> here)</li>
<li>Pull some image (optional) <code class="highlight">docker pull alpine:latest</code>, re-tag it to reference your newly created registry <code class="highlight">docker tag alpine:latest k3d-mycluster-registry:12345/testimage:local</code> and push it <code class="highlight">docker push k3d-mycluster-registry:12345/testimage:local</code></li>
<li>Use kubectl to create a new pod in your cluster using that image to see, if the cluster can pull from the new registry: <code class="highlight">kubectl run --image k3d-mycluster-registry:12345/testimage:local testimage --command -- tail -f /dev/null</code> (creates a container that will not do anything but keep on running)</li>
</ol>
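<p>The steps above can be sketched as a single script. Note that the registry port is discovered at runtime via <code>docker inspect</code> rather than hard-coded; the assumption that the registry listens on container port <code>5000</code> reflects the default registry image, and the whole workflow only runs when <code>k3d</code> is actually installed.</p>

```shell
#!/usr/bin/env sh
# Sketch of the registry-create workflow described above.
# Cluster name and image tag follow the example in the text.
CLUSTER=mycluster
REGISTRY="k3d-${CLUSTER}-registry"

demo() {
  k3d cluster create "$CLUSTER" --registry-create
  # Ask docker which host port was mapped to the registry's port 5000
  PORT=$(docker inspect -f '{{ (index (index .NetworkSettings.Ports "5000/tcp") 0).HostPort }}' "$REGISTRY")
  docker pull alpine:latest
  docker tag alpine:latest "${REGISTRY}:${PORT}/testimage:local"
  docker push "${REGISTRY}:${PORT}/testimage:local"
  # Run a pod from the pushed image to verify the cluster can pull it
  kubectl run testimage --image "${REGISTRY}:${PORT}/testimage:local" --command -- tail -f /dev/null
}

# Only run the demo when k3d is available on this machine
command -v k3d >/dev/null 2>&1 && demo
```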
<h4 id="create-a-customized-k3d-managed-registry">Create a customized k3d-managed registry<a class="headerlink" href="#create-a-customized-k3d-managed-registry" title="Permanent link">¶</a></h4>
<ol>
<li><code class="highlight">k3d registry create myregistry.localhost --port <span class="m">5111</span></code> creates a new registry called <code>myregistry.localhost</code> (which can be used with automatic resolution of <code>*.localhost</code>, see next section)</li>
<li><code class="highlight">k3d cluster create newcluster --registry-use k3d-myregistry.localhost:5111</code> (make sure you use the <code>k3d-</code> prefix here) creates a new cluster set up to use that registry</li>
<li>continue with steps 3 and 4 from the previous section to test it</li>
</ol>
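<p>The two commands above, combined into one sketch (registry name and port are the example values from the text; the script is a no-op when <code>k3d</code> is not installed):</p>

```shell
#!/usr/bin/env sh
# Create a named registry, then a cluster wired to use it.
REG_NAME=myregistry.localhost
REG_PORT=5111

demo() {
  k3d registry create "$REG_NAME" --port "$REG_PORT"
  # Note the k3d- prefix when referencing the registry container
  k3d cluster create newcluster --registry-use "k3d-${REG_NAME}:${REG_PORT}"
}

command -v k3d >/dev/null 2>&1 && demo
```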
<h3 id="using-your-own-not-k3d-managed-local-registry">Using your own (not k3d-managed) local registry<a class="headerlink" href="#using-your-own-not-k3d-managed-local-registry" title="Permanent link">¶</a></h3>
<p>You can start your own local registry with a few <code>docker</code> commands, like:</p>
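<p>A minimal sketch of such a registry, using the standard <code>registry:2</code> image; the container name, volume name, and port here are illustrative, not prescribed by k3d:</p>

```shell
#!/usr/bin/env sh
# Start a plain Docker registry on the host, backed by a named volume.
NAME=registry.localhost
PORT=5000

run_registry() {
  docker volume create local_registry
  docker container run -d --name "$NAME" \
    -v local_registry:/var/lib/registry \
    --restart always \
    -p "${PORT}:5000" registry:2
}

# Only start the registry when docker is available
command -v docker >/dev/null 2>&1 && run_registry
```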
<p>We will verify these two things for a local registry (located at <code>registry.localhost:5000</code>) running on your development machine. Things would be basically the same for checking an external registry, but some additional configuration could be necessary on your local machine when using an authenticated or secure registry (please refer to Docker’s documentation for this).</p>
<p>First, we can download some image (like <code>nginx</code>) and push it to our local registry with:</p>
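<p>The pull/tag/push sequence might look like this, with the registry address taken from the example above:</p>

```shell
#!/usr/bin/env sh
# Download nginx from Docker Hub, re-tag it for the local registry, and push.
REGISTRY=registry.localhost:5000

push_nginx() {
  docker pull nginx:latest
  docker tag nginx:latest "${REGISTRY}/nginx:latest"
  docker push "${REGISTRY}/nginx:latest"
}

# Only run when docker is available
command -v docker >/dev/null 2>&1 && push_nginx
```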
<li><em>Note:</em> this won’t switch the current-context (append <code>--kubeconfig-switch-context</code> to do so)</li>
</ul>
</li>
</ul>
<p class="admonition-title">Switching the current context</p>
<p>None of the above options switch the current-context by default.
This is intended to be least intrusive, since the current-context has a global effect.
You can switch the current-context directly with the <code>kubeconfig merge</code> command by adding the <code>--kubeconfig-switch-context</code> flag.</p>
</div>
<h2 id="removing-cluster-details-from-the-kubeconfig">Removing cluster details from the kubeconfig<a class="headerlink" href="#removing-cluster-details-from-the-kubeconfig" title="Permanent link">¶</a></h2>
<p><code class="highlight">k3d cluster delete mycluster</code> will always remove the details for <code>mycluster</code> from the default kubeconfig.
It will also delete the respective kubeconfig file in <code>$HOME/.k3d/</code> if it exists.</p>
<p><code>k3d kubeconfig merge</code> lets you specify one or more clusters via arguments <em>or</em> all of them via <code>--all</code>.
All kubeconfigs will then be merged into a single file if <code>--kubeconfig-merge-default</code> or <code>--output</code> is specified.
If neither of those two flags was specified, a new file will be created per cluster and the merged path (e.g. <code>$HOME/.k3d/kubeconfig-cluster1.yaml:$HOME/.k3d/cluster2.yaml</code>) will be returned.
Note that with multiple clusters specified, the <code>--kubeconfig-switch-context</code> flag will switch the current context to the cluster that was last in the list.</p>
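<p>A sketch of the two merge modes described above, using the v4 flag names from this page (the cluster name <code>mycluster</code> is illustrative; nothing runs unless <code>k3d</code> is installed):</p>

```shell
#!/usr/bin/env sh
# Demonstrate the two k3d kubeconfig merge modes.
CLUSTER=mycluster

merge_demo() {
  # Merge one cluster's kubeconfig into the default kubeconfig
  # and switch the current context to it
  k3d kubeconfig merge "$CLUSTER" --kubeconfig-merge-default --kubeconfig-switch-context
  # Write one file per cluster under $HOME/.k3d/ and print the
  # colon-separated path list
  k3d kubeconfig merge --all
}

command -v k3d >/dev/null 2>&1 && merge_demo
```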
<div class="md-source-date">
<small>
Last update: <span class="git-revision-date-localized-plugin git-revision-date-localized-plugin-date">January 5, 2021</span>