From 68770ec23f5c985ec98154239ae0f8e8357e1407 Mon Sep 17 00:00:00 2001
From: iwilltry42
Date: Thu, 14 Jan 2021 12:17:41 +0000
Subject: [PATCH] commit 5092a90d56fb2fa1df3c42dc49aa7977d4968fab

Merge: 4b1148f c6df144
Author: Thorsten Klein
Date: Thu Jan 14 13:16:05 2021 +0100

    [NEW VERSION v4] Merge pull request #447 from rancher/main-v4
---
 faq/v1vsv3-comparison/index.html          |   4 +-
 index.html                                |  22 ++--
 internals/defaults/index.html             |  21 ++--
 internals/networking/index.html           |   6 +-
 search/search_index.json                  |   2 +-
 sitemap.xml                               |  24 ++--
 sitemap.xml.gz                            | Bin 315 -> 315 bytes
 usage/commands/index.html                 | 128 +++++++++++---------
 usage/guides/cuda/build.sh                |   2 +-
 usage/guides/cuda/index.html              |  84 +++++++------
 usage/guides/exposing_services/index.html |  15 +--
 usage/guides/registries/index.html        | 139 +++++++++++++++-------
 usage/kubeconfig/index.html               |  22 ++--
 13 files changed, 272 insertions(+), 197 deletions(-)

diff --git a/faq/v1vsv3-comparison/index.html b/faq/v1vsv3-comparison/index.html
index 2456b548..2570c1c1 100644
--- a/faq/v1vsv3-comparison/index.html
+++ b/faq/v1vsv3-comparison/index.html
@@ -581,7 +581,7 @@
  - --label -> planned
  - --workers -> implemented
  - --auto-restart -> dropped (docker's `unless-stopped` is set by default)
- - --enable-registry -> planned (possible consolidation into less registry-related commands?)
+ - --enable-registry -> coming in v4.0.0 (2021) as `--registry-create` and `--registry-use`
  - --registry-name -> TBD
  - --registry-port -> TBD
  - --registry-volume -> TBD
@@ -626,7 +626,7 @@
- Last update: July 14, 2020
+ Last update: January 5, 2021
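The hunk above references the new v4 registry flags. As a rough orientation, a usage sketch follows; the exact flag syntax and the registry name are assumptions based on that mapping, not something this patch specifies:

    # assumed sketch of the v4 registry flags named in the comparison above
    k3d cluster create mycluster --registry-create           # create the cluster together with a k3d-managed registry
    k3d cluster create second --registry-use k3d-myregistry  # attach another cluster to an existing registry (name is hypothetical)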
diff --git a/index.html b/index.html
index f48f3b2e..b1288110 100644
--- a/index.html
+++ b/index.html
@@ -628,7 +628,7 @@

Overview

k3d

- This page is targeting k3d v3.0.0 and newer!
+ This page is targeting k3d v4.0.0 and newer!

k3d is a lightweight wrapper to run k3s (Rancher Lab’s minimal Kubernetes distribution) in docker.

k3d makes it very easy to create single- and multi-node k3s clusters in docker, e.g. for local development on Kubernetes.

View a quick demo

@@ -687,24 +687,18 @@

Installation

You have several options there:

  • use the install script to grab the latest release:
    • wget: wget -q -O - https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash
    • curl: curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash
  • use the install script to grab a specific release (via TAG environment variable):
-   • wget: wget -q -O - https://raw.githubusercontent.com/rancher/k3d/main/install.sh | TAG=v3.0.0 bash
-   • curl: curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | TAG=v3.0.0 bash
+   • wget: wget -q -O - https://raw.githubusercontent.com/rancher/k3d/main/install.sh | TAG=v4.0.0 bash
+   • curl: curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | TAG=v4.0.0 bash

  • use Homebrew: brew install k3d (Homebrew is available for MacOS and Linux)
    • Formula can be found in homebrew/homebrew-core and is mirrored to homebrew/linuxbrew-core
  • install via AUR package rancher-k3d-bin: yay -S rancher-k3d-bin
  • grab a release from the release tab and install it yourself.
  • install via go: go install github.com/rancher/k3d (Note: this will give you unreleased/bleeding-edge changes)
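Whichever installation method is chosen, a quick sanity check confirms the binary is on the PATH (illustrative; the output format varies between releases):

    # verify the installation; prints the k3d and bundled k3s versions
    k3d version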
@@ -718,7 +712,7 @@
    k3d cluster create mycluster
     

    Get the new cluster’s connection details merged into your default kubeconfig (usually specified using the KUBECONFIG environment variable or the default path $HOME/.kube/config) and directly switch to the new context:

-   k3d kubeconfig merge mycluster --switch-context
+   k3d kubeconfig merge mycluster --kubeconfig-switch-context
     

    Use the new cluster with kubectl, e.g.:

    kubectl get nodes
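Putting the quick-start steps together with the renamed v4 flag from the hunk above (the agent count is only an illustrative value):

    # create a cluster with one server and two agent nodes
    k3d cluster create mycluster --agents 2
    # merge its kubeconfig into the default kubeconfig and switch to the new context (v4 flag name)
    k3d kubeconfig merge mycluster --kubeconfig-switch-context
    # verify that all nodes are ready
    kubectl get nodes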
    @@ -735,7 +729,7 @@
     
- Last update: January 13, 2021
+ Last update: January 14, 2021
diff --git a/internals/defaults/index.html b/internals/defaults/index.html
index 74f0d7fc..78be1509 100644
--- a/internals/defaults/index.html
+++ b/internals/defaults/index.html
@@ -516,26 +516,23 @@

    Defaults

  • multiple server nodes
    • by default, when --server > 1 and no --datastore-x option is set, the first server node (server-0) will be the initializing server node
    • the initializing server node will have the --cluster-init flag appended
    • all other server nodes will refer to the initializing server node via --server https://<init-node>:6443
  • API-Ports
-   • by default, we don’t expose any API-Port (no host port mapping)
+   • by default, we expose the API-Port (6443) by forwarding traffic from the default server loadbalancer (nginx container) to the server node(s)
+   • port 6443 of the loadbalancer is then mapped to a specific (--api-port flag) or a random (default) port on the host system
  • kubeconfig
-   • if --[update|merge]-default-kubeconfig is set, we use the default loading rules to get the default kubeconfig:
+   • if --kubeconfig-update-default is set, we use the default loading rules to get the default kubeconfig:
      • First: kubeconfig specified via the KUBECONFIG environment variable (error out if multiple are specified)
      • Second: default kubeconfig in home directory (e.g. $HOME/.kube/config)
+ • Networking
+   • by default, k3d creates a new (docker) network for every cluster
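A short sketch tying the defaults above together; the host port value is only an illustrative choice:

    # three server nodes: server-0 is started with --cluster-init, the other servers join it;
    # the loadbalancer's port 6443 is published on host port 6550 instead of a random port
    k3d cluster create multiserver --servers 3 --api-port 6550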
      @@ -545,7 +542,7 @@
- Last update: July 14, 2020
+ Last update: January 5, 2021
diff --git a/internals/networking/index.html b/internals/networking/index.html
index 4c7f6eac..22bd29da 100644
--- a/internals/networking/index.html
+++ b/internals/networking/index.html
@@ -628,11 +628,9 @@

      Networking

      Introduction

By default, k3d creates a new (docker) network for every new cluster. Use the --network STRING flag upon creation to connect to an existing network.
@@ -656,7 +654,7 @@ run into issues with grabbing certificates from the API-Server. Single-Node clus
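As an illustration of the --network flag described above (the network name is hypothetical):

    # attach the new cluster to a pre-existing docker network instead of a k3d-managed one
    docker network create my-existing-net
    k3d cluster create mycluster --network my-existing-net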

- Last update: July 14, 2020
+ Last update: January 5, 2021
      diff --git a/search/search_index.json b/search/search_index.json index 45526d63..567dbd54 100644 --- a/search/search_index.json +++ b/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"Overview \u00b6 This page is targeting k3d v3.0.0 and newer! k3d is a lightweight wrapper to run k3s (Rancher Lab\u2019s minimal Kubernetes distribution) in docker. k3d makes it very easy to create single- and multi-node k3s clusters in docker, e.g. for local development on Kubernetes. View a quick demo Learning \u00b6 Rancher Meetup - May 2020 - Simplifying Your Cloud-Native Development Workflow With K3s, K3c and K3d (YouTube) k3d demo repository: iwilltry42/k3d-demo Requirements \u00b6 docker Releases \u00b6 Platform Stage Version Release Date GitHub Releases stable GitHub Releases latest Homebrew - - Chocolatey stable - Installation \u00b6 You have several options there: use the install script to grab the latest release: wget: wget -q -O - https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash curl: curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash use the install script to grab a specific release (via TAG environment variable): wget: wget -q -O - https://raw.githubusercontent.com/rancher/k3d/main/install.sh | TAG = v3.0.0 bash curl: curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | TAG = v3.0.0 bash use Homebrew : brew install k3d (Homebrew is available for MacOS and Linux) Formula can be found in homebrew/homebrew-core and is mirrored to homebrew/linuxbrew-core install via AUR package rancher-k3d-bin : yay -S rancher-k3d-bin grab a release from the release tab and install it yourself. install via go: go install github.com/rancher/k3d ( Note : this will give you unreleased/bleeding-edge changes) use arkade : arkade get k3d use asdf : asdf plugin-add k3d , then asdf install k3d with = latest or 3.x.x for a specific version (maintained by spencergilbert/asdf-k3d ) use Chocolatey : choco install k3d (Chocolatey package manager is available for Windows) package source can be found in erwinkersten/chocolatey-packages Quick Start \u00b6 Create a cluster named mycluster with just a single server node: k3d cluster create mycluster Get the new cluster\u2019s connection details merged into your default kubeconfig (usually specified using the KUBECONFIG environment variable or the default path $HOME /.kube/config ) and directly switch to the new context: k3d kubeconfig merge mycluster --switch-context Use the new cluster with kubectl , e.g.: kubectl get nodes Related Projects \u00b6 k3x : a graphics interface (for Linux) to k3d.","title":"Overview"},{"location":"#overview","text":"This page is targeting k3d v3.0.0 and newer! k3d is a lightweight wrapper to run k3s (Rancher Lab\u2019s minimal Kubernetes distribution) in docker. k3d makes it very easy to create single- and multi-node k3s clusters in docker, e.g. for local development on Kubernetes. 
View a quick demo","title":"Overview"},{"location":"#learning","text":"Rancher Meetup - May 2020 - Simplifying Your Cloud-Native Development Workflow With K3s, K3c and K3d (YouTube) k3d demo repository: iwilltry42/k3d-demo","title":"Learning"},{"location":"#requirements","text":"docker","title":"Requirements"},{"location":"#releases","text":"Platform Stage Version Release Date GitHub Releases stable GitHub Releases latest Homebrew - - Chocolatey stable -","title":"Releases"},{"location":"#installation","text":"You have several options there: use the install script to grab the latest release: wget: wget -q -O - https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash curl: curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash use the install script to grab a specific release (via TAG environment variable): wget: wget -q -O - https://raw.githubusercontent.com/rancher/k3d/main/install.sh | TAG = v3.0.0 bash curl: curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | TAG = v3.0.0 bash use Homebrew : brew install k3d (Homebrew is available for MacOS and Linux) Formula can be found in homebrew/homebrew-core and is mirrored to homebrew/linuxbrew-core install via AUR package rancher-k3d-bin : yay -S rancher-k3d-bin grab a release from the release tab and install it yourself. install via go: go install github.com/rancher/k3d ( Note : this will give you unreleased/bleeding-edge changes) use arkade : arkade get k3d use asdf : asdf plugin-add k3d , then asdf install k3d with = latest or 3.x.x for a specific version (maintained by spencergilbert/asdf-k3d ) use Chocolatey : choco install k3d (Chocolatey package manager is available for Windows) package source can be found in erwinkersten/chocolatey-packages","title":"Installation"},{"location":"#quick-start","text":"Create a cluster named mycluster with just a single server node: k3d cluster create mycluster Get the new cluster\u2019s connection details merged into your default kubeconfig (usually specified using the KUBECONFIG environment variable or the default path $HOME /.kube/config ) and directly switch to the new context: k3d kubeconfig merge mycluster --switch-context Use the new cluster with kubectl , e.g.: kubectl get nodes","title":"Quick Start"},{"location":"#related-projects","text":"k3x : a graphics interface (for Linux) to k3d.","title":"Related Projects"},{"location":"faq/faq/","text":"FAQ / Nice to know \u00b6 Issues with BTRFS \u00b6 As @jaredallard pointed out , people running k3d on a system with btrfs , may need to mount /dev/mapper into the nodes for the setup to work. This will do: k3d cluster create CLUSTER_NAME -v /dev/mapper:/dev/mapper Issues with ZFS \u00b6 k3s currently has no support for ZFS and thus, creating multi-server setups (e.g. k3d cluster create multiserver --servers 3 ) fails, because the initializing server node (server flag --cluster-init ) errors out with the following log: starting kubernetes: preparing server: start cluster and https: raft_init () : io: create I/O capabilities probe file: posix_allocate: operation not supported on socket This issue can be worked around by providing docker with a different filesystem (that\u2019s also better for docker-in-docker stuff). 
A possible solution can be found here: https://github.com/rancher/k3s/issues/1688#issuecomment-619570374 Pods evicted due to lack of disk space \u00b6 Pods go to evicted state after doing X Related issues: #133 - Pods evicted due to NodeHasDiskPressure (collection of #119 and #130) Background: somehow docker runs out of space for the k3d node containers, which triggers a hard eviction in the kubelet Possible fix/workaround by @zer0def : use a docker storage driver which cleans up properly (e.g. overlay2) clean up or expand docker root filesystem change the kubelet\u2019s eviction thresholds upon cluster creation: k3d cluster create --k3s-agent-arg '--kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1%' --k3s-agent-arg '--kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1%' Restarting a multi-server cluster or the initializing server node fails \u00b6 What you do: You create a cluster with more than one server node and later, you either stop server-0 or stop/start the whole cluster What fails: After the restart, you cannot connect to the cluster anymore and kubectl will give you a lot of errors What causes this issue: it\u2019s a known issue with dqlite in k3s which doesn\u2019t allow the initializing server node to go down What\u2019s the solution: Hopefully, this will be solved by the planned replacement of dqlite with embedded etcd in k3s Related issues: #262 Passing additional arguments/flags to k3s (and on to e.g. the kube-apiserver) \u00b6 The Problem: Passing a feature flag to the Kubernetes API Server running inside k3s. Example: you want to enable the EphemeralContainers feature flag in Kubernetes Solution: k3d cluster create --k3s-server-arg '--kube-apiserver-arg=feature-gates=EphemeralContainers=true' Note: Be aware of where the flags require dashes ( -- ) and where not. the k3s flag ( --kube-apiserver-arg ) has the dashes the kube-apiserver flag feature-gates doesn\u2019t have them (k3s adds them internally) Second example: k3d cluster create k3d-one --k3s-server-arg --cluster-cidr = \"10.118.0.0/17\" --k3s-server-arg --service-cidr = \"10.118.128.0/17\" --k3s-server-arg --disable = servicelb --k3s-server-arg --disable = traefik --verbose Note: There are many ways to use the \" and ' quotes, just be aware, that sometimes shells also try to interpret/interpolate parts of the commands How to access services (like a database) running on my Docker Host Machine \u00b6 As of version v3.1.0, we\u2019re injecting the host.k3d.internal entry into the k3d containers (k3s nodes) and into the CoreDNS ConfigMap, enabling you to access your host system by referring to it as host.k3d.internal","title":"FAQ / Nice to know"},{"location":"faq/faq/#faq-nice-to-know","text":"","title":"FAQ / Nice to know"},{"location":"faq/faq/#issues-with-btrfs","text":"As @jaredallard pointed out , people running k3d on a system with btrfs , may need to mount /dev/mapper into the nodes for the setup to work. This will do: k3d cluster create CLUSTER_NAME -v /dev/mapper:/dev/mapper","title":"Issues with BTRFS"},{"location":"faq/faq/#issues-with-zfs","text":"k3s currently has no support for ZFS and thus, creating multi-server setups (e.g. 
k3d cluster create multiserver --servers 3 ) fails, because the initializing server node (server flag --cluster-init ) errors out with the following log: starting kubernetes: preparing server: start cluster and https: raft_init () : io: create I/O capabilities probe file: posix_allocate: operation not supported on socket This issue can be worked around by providing docker with a different filesystem (that\u2019s also better for docker-in-docker stuff). A possible solution can be found here: https://github.com/rancher/k3s/issues/1688#issuecomment-619570374","title":"Issues with ZFS"},{"location":"faq/faq/#pods-evicted-due-to-lack-of-disk-space","text":"Pods go to evicted state after doing X Related issues: #133 - Pods evicted due to NodeHasDiskPressure (collection of #119 and #130) Background: somehow docker runs out of space for the k3d node containers, which triggers a hard eviction in the kubelet Possible fix/workaround by @zer0def : use a docker storage driver which cleans up properly (e.g. overlay2) clean up or expand docker root filesystem change the kubelet\u2019s eviction thresholds upon cluster creation: k3d cluster create --k3s-agent-arg '--kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1%' --k3s-agent-arg '--kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1%'","title":"Pods evicted due to lack of disk space"},{"location":"faq/faq/#restarting-a-multi-server-cluster-or-the-initializing-server-node-fails","text":"What you do: You create a cluster with more than one server node and later, you either stop server-0 or stop/start the whole cluster What fails: After the restart, you cannot connect to the cluster anymore and kubectl will give you a lot of errors What causes this issue: it\u2019s a known issue with dqlite in k3s which doesn\u2019t allow the initializing server node to go down What\u2019s the solution: Hopefully, this will be solved by the planned replacement of dqlite with embedded etcd in k3s Related issues: #262","title":"Restarting a multi-server cluster or the initializing server node fails"},{"location":"faq/faq/#passing-additional-argumentsflags-to-k3s-and-on-to-eg-the-kube-apiserver","text":"The Problem: Passing a feature flag to the Kubernetes API Server running inside k3s. Example: you want to enable the EphemeralContainers feature flag in Kubernetes Solution: k3d cluster create --k3s-server-arg '--kube-apiserver-arg=feature-gates=EphemeralContainers=true' Note: Be aware of where the flags require dashes ( -- ) and where not. the k3s flag ( --kube-apiserver-arg ) has the dashes the kube-apiserver flag feature-gates doesn\u2019t have them (k3s adds them internally) Second example: k3d cluster create k3d-one --k3s-server-arg --cluster-cidr = \"10.118.0.0/17\" --k3s-server-arg --service-cidr = \"10.118.128.0/17\" --k3s-server-arg --disable = servicelb --k3s-server-arg --disable = traefik --verbose Note: There are many ways to use the \" and ' quotes, just be aware, that sometimes shells also try to interpret/interpolate parts of the commands","title":"Passing additional arguments/flags to k3s (and on to e.g. 
the kube-apiserver)"},{"location":"faq/faq/#how-to-access-services-like-a-database-running-on-my-docker-host-machine","text":"As of version v3.1.0, we\u2019re injecting the host.k3d.internal entry into the k3d containers (k3s nodes) and into the CoreDNS ConfigMap, enabling you to access your host system by referring to it as host.k3d.internal","title":"How to access services (like a database) running on my Docker Host Machine"},{"location":"faq/v1vsv3-comparison/","text":"Feature Comparison: v1 vs. v3 \u00b6 v1.x feature -> implementation in v3 \u00b6 - k3d - check-tools -> won't do - shell -> planned: `k3d shell CLUSTER` - --name -> planned: drop (now as arg) - --command -> planned: keep - --shell -> planned: keep (or second arg) - auto, bash, zsh - create -> `k3d cluster create CLUSTERNAME` - --name -> dropped, implemented via arg - --volume -> implemented - --port -> implemented - --port-auto-offset -> TBD - --api-port -> implemented - --wait -> implemented - --image -> implemented - --server-arg -> implemented as `--k3s-server-arg` - --agent-arg -> implemented as `--k3s-agent-arg` - --env -> planned - --label -> planned - --workers -> implemented - --auto-restart -> dropped (docker's `unless-stopped` is set by default) - --enable-registry -> planned (possible consolidation into less registry-related commands?) - --registry-name -> TBD - --registry-port -> TBD - --registry-volume -> TBD - --registries-file -> TBD - --enable-registry-cache -> TBD - (add-node) -> `k3d node create NODENAME` - --role -> implemented - --name -> dropped, implemented as arg - --count -> implemented as `--replicas` - --image -> implemented - --arg -> planned - --env -> planned - --volume -> planned - --k3s -> TBD - --k3s-secret -> TBD - --k3s-token -> TBD - delete -> `k3d cluster delete CLUSTERNAME` - --name -> dropped, implemented as arg - --all -> implemented - --prune -> TBD - --keep-registry-volume -> TBD - stop -> `k3d cluster stop CLUSTERNAME` - --name -> dropped, implemented as arg - --all -> implemented - start -> `k3d cluster start CLUSTERNAME` - --name -> dropped, implemented as arg - --all -> implemented - list -> dropped, implemented as `k3d get clusters` - get-kubeconfig -> `k3d kubeconfig get|merge CLUSTERNAME` - --name -> dropped, implemented as arg - --all -> implemented - --overwrite -> implemented - import-images -> `k3d image import [--cluster CLUSTERNAME] [--keep] IMAGES` - --name -> implemented as `--cluster` - --no-remove -> implemented as `--keep-tarball`","title":"Feature Comparison: v1 vs. v3"},{"location":"faq/v1vsv3-comparison/#feature-comparison-v1-vs-v3","text":"","title":"Feature Comparison: v1 vs. v3"},{"location":"faq/v1vsv3-comparison/#v1x-feature-implementation-in-v3","text":"- k3d - check-tools -> won't do - shell -> planned: `k3d shell CLUSTER` - --name -> planned: drop (now as arg) - --command -> planned: keep - --shell -> planned: keep (or second arg) - auto, bash, zsh - create -> `k3d cluster create CLUSTERNAME` - --name -> dropped, implemented via arg - --volume -> implemented - --port -> implemented - --port-auto-offset -> TBD - --api-port -> implemented - --wait -> implemented - --image -> implemented - --server-arg -> implemented as `--k3s-server-arg` - --agent-arg -> implemented as `--k3s-agent-arg` - --env -> planned - --label -> planned - --workers -> implemented - --auto-restart -> dropped (docker's `unless-stopped` is set by default) - --enable-registry -> planned (possible consolidation into less registry-related commands?) 
- --registry-name -> TBD - --registry-port -> TBD - --registry-volume -> TBD - --registries-file -> TBD - --enable-registry-cache -> TBD - (add-node) -> `k3d node create NODENAME` - --role -> implemented - --name -> dropped, implemented as arg - --count -> implemented as `--replicas` - --image -> implemented - --arg -> planned - --env -> planned - --volume -> planned - --k3s -> TBD - --k3s-secret -> TBD - --k3s-token -> TBD - delete -> `k3d cluster delete CLUSTERNAME` - --name -> dropped, implemented as arg - --all -> implemented - --prune -> TBD - --keep-registry-volume -> TBD - stop -> `k3d cluster stop CLUSTERNAME` - --name -> dropped, implemented as arg - --all -> implemented - start -> `k3d cluster start CLUSTERNAME` - --name -> dropped, implemented as arg - --all -> implemented - list -> dropped, implemented as `k3d get clusters` - get-kubeconfig -> `k3d kubeconfig get|merge CLUSTERNAME` - --name -> dropped, implemented as arg - --all -> implemented - --overwrite -> implemented - import-images -> `k3d image import [--cluster CLUSTERNAME] [--keep] IMAGES` - --name -> implemented as `--cluster` - --no-remove -> implemented as `--keep-tarball`","title":"v1.x feature -> implementation in v3"},{"location":"internals/defaults/","text":"Defaults \u00b6 multiple server nodes by default, when --server > 1 and no --datastore-x option is set, the first server node (server-0) will be the initializing server node the initializing server node will have the --cluster-init flag appended all other server nodes will refer to the initializing server node via --server https://:6443 API-Ports by default, we don\u2019t expose any API-Port (no host port mapping) kubeconfig if --[update|merge]-default-kubeconfig is set, we use the default loading rules to get the default kubeconfig: First: kubeconfig specified via the KUBECONFIG environment variable (error out if multiple are specified) Second: default kubeconfig in home directory (e.g. $HOME/.kube/config )","title":"Defaults"},{"location":"internals/defaults/#defaults","text":"multiple server nodes by default, when --server > 1 and no --datastore-x option is set, the first server node (server-0) will be the initializing server node the initializing server node will have the --cluster-init flag appended all other server nodes will refer to the initializing server node via --server https://:6443 API-Ports by default, we don\u2019t expose any API-Port (no host port mapping) kubeconfig if --[update|merge]-default-kubeconfig is set, we use the default loading rules to get the default kubeconfig: First: kubeconfig specified via the KUBECONFIG environment variable (error out if multiple are specified) Second: default kubeconfig in home directory (e.g. $HOME/.kube/config )","title":"Defaults"},{"location":"internals/networking/","text":"Networking \u00b6 Related issues: rancher/k3d #220 Introduction \u00b6 By default, k3d creates a new (docker) network for every new cluster. Using the --network STRING flag upon creation to connect to an existing network. Existing networks won\u2019t be managed by k3d together with the cluster lifecycle. Connecting to docker \u201cinternal\u201d/pre-defined networks \u00b6 host network \u00b6 When using the --network flag to connect to the host network (i.e. k3d cluster create --network host ), you won\u2019t be able to create more than one server node . An edge case would be one server node (with agent disabled) and one agent node. bridge network \u00b6 By default, every network that k3d creates is working in bridge mode. 
But when you try to use --network bridge to connect to docker\u2019s internal bridge network, you may run into issues with grabbing certificates from the API-Server. Single-Node clusters should work though. none \u201cnetwork\u201d \u00b6 Well.. this doesn\u2019t really make sense for k3d anyway \u00af_(\u30c4)_/\u00af","title":"Networking"},{"location":"internals/networking/#networking","text":"Related issues: rancher/k3d #220","title":"Networking"},{"location":"internals/networking/#introduction","text":"By default, k3d creates a new (docker) network for every new cluster. Using the --network STRING flag upon creation to connect to an existing network. Existing networks won\u2019t be managed by k3d together with the cluster lifecycle.","title":"Introduction"},{"location":"internals/networking/#connecting-to-docker-internalpre-defined-networks","text":"","title":"Connecting to docker \"internal\"/pre-defined networks"},{"location":"internals/networking/#host-network","text":"When using the --network flag to connect to the host network (i.e. k3d cluster create --network host ), you won\u2019t be able to create more than one server node . An edge case would be one server node (with agent disabled) and one agent node.","title":"host network"},{"location":"internals/networking/#bridge-network","text":"By default, every network that k3d creates is working in bridge mode. But when you try to use --network bridge to connect to docker\u2019s internal bridge network, you may run into issues with grabbing certificates from the API-Server. Single-Node clusters should work though.","title":"bridge network"},{"location":"internals/networking/#none-network","text":"Well.. this doesn\u2019t really make sense for k3d anyway \u00af_(\u30c4)_/\u00af","title":"none \"network\""},{"location":"usage/commands/","text":"Command Tree \u00b6 k3d --verbose # enable verbose (debug) logging (default: false) --version # show k3d and k3s version -h, --help # show help text version # show k3d and k3s version help [ COMMAND ] # show help text for any command completion [ bash | zsh | ( psh | powershell )] # generate completion scripts for common shells cluster [ CLUSTERNAME ] # default cluster name is 'k3s-default' create --api-port # specify the port on which the cluster will be accessible (e.g. 
via kubectl) -i, --image # specify which k3s image should be used for the nodes --k3s-agent-arg # add additional arguments to the k3s agent (see https://rancher.com/docs/k3s/latest/en/installation/install-options/agent-config/#k3s-agent-cli-help) --k3s-server-arg # add additional arguments to the k3s server (see https://rancher.com/docs/k3s/latest/en/installation/install-options/server-config/#k3s-server-cli-help) -s, --servers # specify how many server nodes you want to create --network # specify a network you want to connect to --no-hostip # disable the automatic injection of the Host IP as 'host.k3d.internal' into the containers and CoreDN --no-image-volume # disable the creation of a volume for storing images (used for the 'k3d load image' command) --no-lb # disable the creation of a LoadBalancer in front of the server nodes --no-rollback # disable the automatic rollback actions, if anything goes wrong -p, --port # add some more port mappings --token # specify a cluster token (default: auto-generated) --timeout # specify a timeout, after which the cluster creation will be interrupted and changes rolled back --update-default-kubeconfig # enable the automated update of the default kubeconfig with the details of the newly created cluster (also sets '--wait=true') --switch-context # (implies --update-default-kubeconfig) automatically sets the current-context of your default kubeconfig to the new cluster's context -v, --volume # specify additional bind-mounts --wait # enable waiting for all server nodes to be ready before returning -a, --agents # specify how many agent nodes you want to create -e, --env # add environment variables to the node containers start CLUSTERNAME # start a (stopped) cluster -a, --all # start all clusters --wait # wait for all servers and server-loadbalancer to be up before returning --timeout # maximum waiting time for '--wait' before canceling/returning stop CLUSTERNAME # stop a cluster -a, --all # stop all clusters delete CLUSTERNAME # delete an existing cluster -a, --all # delete all existing clusters list [ CLUSTERNAME [ CLUSTERNAME ... ]] --no-headers # do not print headers --token # show column with cluster tokens node create NODENAME # Create new nodes (and add them to existing clusters) -c, --cluster # specify the cluster that the node shall connect to -i, --image # specify which k3s image should be used for the node(s) --replicas # specify how many replicas you want to create with this spec --role # specify the node role --wait # wait for the node to be up and running before returning --timeout # specify a timeout duration, after which the node creation will be interrupted, if not done yet start NODENAME # start a (stopped) node stop NODENAME # stop a node delete NODENAME # delete an existing node -a, --all # delete all existing nodes list NODENAME --no-headers # do not print headers kubeconfig get ( CLUSTERNAME [ CLUSTERNAME ... ] | --all ) # get kubeconfig from cluster(s) and write it to stdout -a, --all # get kubeconfigs from all clusters merge | write ( CLUSTERNAME [ CLUSTERNAME ... ] | --all ) # get kubeconfig from cluster(s) and merge it/them into into a file in $HOME/.k3d (or whatever you specify via the flags) -a, --all # get kubeconfigs from all clusters --output # specify the output file where the kubeconfig should be written to --overwrite # [Careful!] 
forcefully overwrite the output file, ignoring existing contents -s, --switch-context # switch current-context in kubeconfig to the new context -u, --update # update conflicting fields in existing kubeconfig (default: true) -d, --merge-default-kubeconfig # update the default kubeconfig (usually $KUBECONFIG or $HOME/.kube/config) image import [ IMAGE | ARCHIVE [ IMAGE | ARCHIVE ... ]] # Load one or more images from the local runtime environment or tar-archives into k3d clusters -c, --cluster # clusters to load the image into -k, --keep-tarball # do not delete the image tarball from the shared volume after completion","title":"Command Tree"},{"location":"usage/commands/#command-tree","text":"k3d --verbose # enable verbose (debug) logging (default: false) --version # show k3d and k3s version -h, --help # show help text version # show k3d and k3s version help [ COMMAND ] # show help text for any command completion [ bash | zsh | ( psh | powershell )] # generate completion scripts for common shells cluster [ CLUSTERNAME ] # default cluster name is 'k3s-default' create --api-port # specify the port on which the cluster will be accessible (e.g. via kubectl) -i, --image # specify which k3s image should be used for the nodes --k3s-agent-arg # add additional arguments to the k3s agent (see https://rancher.com/docs/k3s/latest/en/installation/install-options/agent-config/#k3s-agent-cli-help) --k3s-server-arg # add additional arguments to the k3s server (see https://rancher.com/docs/k3s/latest/en/installation/install-options/server-config/#k3s-server-cli-help) -s, --servers # specify how many server nodes you want to create --network # specify a network you want to connect to --no-hostip # disable the automatic injection of the Host IP as 'host.k3d.internal' into the containers and CoreDN --no-image-volume # disable the creation of a volume for storing images (used for the 'k3d load image' command) --no-lb # disable the creation of a LoadBalancer in front of the server nodes --no-rollback # disable the automatic rollback actions, if anything goes wrong -p, --port # add some more port mappings --token # specify a cluster token (default: auto-generated) --timeout # specify a timeout, after which the cluster creation will be interrupted and changes rolled back --update-default-kubeconfig # enable the automated update of the default kubeconfig with the details of the newly created cluster (also sets '--wait=true') --switch-context # (implies --update-default-kubeconfig) automatically sets the current-context of your default kubeconfig to the new cluster's context -v, --volume # specify additional bind-mounts --wait # enable waiting for all server nodes to be ready before returning -a, --agents # specify how many agent nodes you want to create -e, --env # add environment variables to the node containers start CLUSTERNAME # start a (stopped) cluster -a, --all # start all clusters --wait # wait for all servers and server-loadbalancer to be up before returning --timeout # maximum waiting time for '--wait' before canceling/returning stop CLUSTERNAME # stop a cluster -a, --all # stop all clusters delete CLUSTERNAME # delete an existing cluster -a, --all # delete all existing clusters list [ CLUSTERNAME [ CLUSTERNAME ... 
]] --no-headers # do not print headers --token # show column with cluster tokens node create NODENAME # Create new nodes (and add them to existing clusters) -c, --cluster # specify the cluster that the node shall connect to -i, --image # specify which k3s image should be used for the node(s) --replicas # specify how many replicas you want to create with this spec --role # specify the node role --wait # wait for the node to be up and running before returning --timeout # specify a timeout duration, after which the node creation will be interrupted, if not done yet start NODENAME # start a (stopped) node stop NODENAME # stop a node delete NODENAME # delete an existing node -a, --all # delete all existing nodes list NODENAME --no-headers # do not print headers kubeconfig get ( CLUSTERNAME [ CLUSTERNAME ... ] | --all ) # get kubeconfig from cluster(s) and write it to stdout -a, --all # get kubeconfigs from all clusters merge | write ( CLUSTERNAME [ CLUSTERNAME ... ] | --all ) # get kubeconfig from cluster(s) and merge it/them into into a file in $HOME/.k3d (or whatever you specify via the flags) -a, --all # get kubeconfigs from all clusters --output # specify the output file where the kubeconfig should be written to --overwrite # [Careful!] forcefully overwrite the output file, ignoring existing contents -s, --switch-context # switch current-context in kubeconfig to the new context -u, --update # update conflicting fields in existing kubeconfig (default: true) -d, --merge-default-kubeconfig # update the default kubeconfig (usually $KUBECONFIG or $HOME/.kube/config) image import [ IMAGE | ARCHIVE [ IMAGE | ARCHIVE ... ]] # Load one or more images from the local runtime environment or tar-archives into k3d clusters -c, --cluster # clusters to load the image into -k, --keep-tarball # do not delete the image tarball from the shared volume after completion","title":"Command Tree"},{"location":"usage/kubeconfig/","text":"Handling Kubeconfigs \u00b6 By default, k3d won\u2019t touch your kubeconfig without you telling it to do so. To get a kubeconfig set up for you to connect to a k3d cluster, you can go different ways. What is the default kubeconfig? We determine the path of the used or default kubeconfig in two ways: Using the KUBECONFIG environment variable, if it specifies exactly one file Using the default path (e.g. on Linux it\u2019s $HOME /.kube/config ) Getting the kubeconfig for a newly created cluster \u00b6 Create a new kubeconfig file after cluster creation k3d kubeconfig write mycluster Note: this will create (or update) the file $HOME/.k3d/kubeconfig-mycluster.yaml Tip: Use it: export KUBECONFIG = $( k3d kubeconfig write mycluster ) Note 2 : alternatively you can use k3d kubeconfig get mycluster > some-file.yaml Update your default kubeconfig upon cluster creation k3d cluster create mycluster --update-kubeconfig Note: this won\u2019t switch the current-context (append --switch-context to do so) Update your default kubeconfig after cluster creation k3d kubeconfig merge mycluster --merge-default-kubeconfig Note: this won\u2019t switch the current-context (append --switch-context to do so) Update a different kubeconfig after cluster creation k3d kubeconfig merge mycluster --output some/other/file.yaml Note: this won\u2019t switch the current-context The file will be created if it doesn\u2019t exist Switching the current context None of the above options switch the current-context by default. This is intended to be least intrusive, since the current-context has a global effect. 
You can switch the current-context directly with the kubeconfig merge command by adding the --switch-context flag. Removing cluster details from the kubeconfig \u00b6 k3d cluster delete mycluster will always remove the details for mycluster from the default kubeconfig. It will also delete the respective kubeconfig file in $HOME/.k3d/ if it exists. Handling multiple clusters \u00b6 k3d kubeconfig merge let\u2019s you specify one or more clusters via arguments or all via --all . All kubeconfigs will then be merged into a single file if --merge-default-kubeconfig or --output is specified. If none of those two flags was specified, a new file will be created per cluster and the merged path (e.g. $HOME/.k3d/kubeconfig-cluster1.yaml:$HOME/.k3d/cluster2.yaml ) will be returned. Note, that with multiple cluster specified, the --switch-context flag will change the current context to the cluster which was last in the list.","title":"Handling Kubeconfigs"},{"location":"usage/kubeconfig/#handling-kubeconfigs","text":"By default, k3d won\u2019t touch your kubeconfig without you telling it to do so. To get a kubeconfig set up for you to connect to a k3d cluster, you can go different ways. What is the default kubeconfig? We determine the path of the used or default kubeconfig in two ways: Using the KUBECONFIG environment variable, if it specifies exactly one file Using the default path (e.g. on Linux it\u2019s $HOME /.kube/config )","title":"Handling Kubeconfigs"},{"location":"usage/kubeconfig/#getting-the-kubeconfig-for-a-newly-created-cluster","text":"Create a new kubeconfig file after cluster creation k3d kubeconfig write mycluster Note: this will create (or update) the file $HOME/.k3d/kubeconfig-mycluster.yaml Tip: Use it: export KUBECONFIG = $( k3d kubeconfig write mycluster ) Note 2 : alternatively you can use k3d kubeconfig get mycluster > some-file.yaml Update your default kubeconfig upon cluster creation k3d cluster create mycluster --update-kubeconfig Note: this won\u2019t switch the current-context (append --switch-context to do so) Update your default kubeconfig after cluster creation k3d kubeconfig merge mycluster --merge-default-kubeconfig Note: this won\u2019t switch the current-context (append --switch-context to do so) Update a different kubeconfig after cluster creation k3d kubeconfig merge mycluster --output some/other/file.yaml Note: this won\u2019t switch the current-context The file will be created if it doesn\u2019t exist Switching the current context None of the above options switch the current-context by default. This is intended to be least intrusive, since the current-context has a global effect. You can switch the current-context directly with the kubeconfig merge command by adding the --switch-context flag.","title":"Getting the kubeconfig for a newly created cluster"},{"location":"usage/kubeconfig/#removing-cluster-details-from-the-kubeconfig","text":"k3d cluster delete mycluster will always remove the details for mycluster from the default kubeconfig. It will also delete the respective kubeconfig file in $HOME/.k3d/ if it exists.","title":"Removing cluster details from the kubeconfig"},{"location":"usage/kubeconfig/#handling-multiple-clusters","text":"k3d kubeconfig merge let\u2019s you specify one or more clusters via arguments or all via --all . All kubeconfigs will then be merged into a single file if --merge-default-kubeconfig or --output is specified. If none of those two flags was specified, a new file will be created per cluster and the merged path (e.g. 
$HOME/.k3d/kubeconfig-cluster1.yaml:$HOME/.k3d/cluster2.yaml ) will be returned. Note, that with multiple cluster specified, the --switch-context flag will change the current context to the cluster which was last in the list.","title":"Handling multiple clusters"},{"location":"usage/multiserver/","text":"Creating multi-server clusters \u00b6 Important note For the best results (and less unexpected issues), choose 1, 3, 5, \u2026 server nodes. Embedded dqlite \u00b6 Create a cluster with 3 server nodes using k3s\u2019 embedded dqlite database. The first server to be created will use the --cluster-init flag and k3d will wait for it to be up and running before creating (and connecting) the other server nodes. k3d cluster create multiserver --servers 3 Adding server nodes to a running cluster \u00b6 In theory (and also in practice in most cases), this is as easy as executing the following command: k3d node create newserver --cluster multiserver --role server There\u2019s a trap! If your cluster was initially created with only a single server node, then this will fail. That\u2019s because the initial server node was not started with the --cluster-init flag and thus is not using the dqlite backend.","title":"Creating multi-server clusters"},{"location":"usage/multiserver/#creating-multi-server-clusters","text":"Important note For the best results (and less unexpected issues), choose 1, 3, 5, \u2026 server nodes.","title":"Creating multi-server clusters"},{"location":"usage/multiserver/#embedded-dqlite","text":"Create a cluster with 3 server nodes using k3s\u2019 embedded dqlite database. The first server to be created will use the --cluster-init flag and k3d will wait for it to be up and running before creating (and connecting) the other server nodes. k3d cluster create multiserver --servers 3","title":"Embedded dqlite"},{"location":"usage/multiserver/#adding-server-nodes-to-a-running-cluster","text":"In theory (and also in practice in most cases), this is as easy as executing the following command: k3d node create newserver --cluster multiserver --role server There\u2019s a trap! If your cluster was initially created with only a single server node, then this will fail. That\u2019s because the initial server node was not started with the --cluster-init flag and thus is not using the dqlite backend.","title":"Adding server nodes to a running cluster"},{"location":"usage/guides/calico/","text":"Use Calico instead of Flannel \u00b6 If you want to use NetworkPolicy you can use Calico in k3s instead of Flannel. 1. Download and modify the Calico descriptor \u00b6 You can following the documentation And then you have to change the ConfigMap calico-config . On the cni_network_config add the entry for allowing IP forwarding \"container_settings\" : { \"allow_ip_forwarding\" : true } Or you can directly use this calico.yaml manifest 2. Create the cluster without flannel and with calico \u00b6 On the k3s cluster creation : - add the flag --flannel-backend=none . For this, on k3d you need to forward this flag to k3s with the option --k3s-server-arg . 
- mount ( --volume ) the calico descriptor in the auto deploy manifest directory of k3s /var/lib/rancher/k3s/server/manifests/ So the command of the cluster creation is (when you are at root of the k3d repository) k3d cluster create \" ${ clustername } \" --k3s-server-arg '--flannel-backend=none' --volume \" $( pwd ) /docs/usage/guides/calico.yaml:/var/lib/rancher/k3s/server/manifests/calico.yaml\" In this example : - change \"${clustername}\" with the name of the cluster (or set a variable). - $(pwd)/docs/usage/guides/calico.yaml is the absolute path of the calico manifest, you can adapt it. You can add other options, see . The cluster will start without flannel and with Calico as CNI Plugin. For watching for the pod(s) deployment watch \"kubectl get pods -n kube-system\" You will have something like this at beginning (with the command line kubectl get pods -n kube-system ) NAME READY STATUS RESTARTS AGE helm-install-traefik-pn84f 0/1 Pending 0 3s calico-node-97rx8 0/1 Init:0/3 0 3s metrics-server-7566d596c8-hwnqq 0/1 Pending 0 2s calico-kube-controllers-58b656d69f-2z7cn 0/1 Pending 0 2s local-path-provisioner-6d59f47c7-rmswg 0/1 Pending 0 2s coredns-8655855d6-cxtnr 0/1 Pending 0 2s And when it finish to start NAME READY STATUS RESTARTS AGE metrics-server-7566d596c8-hwnqq 1/1 Running 0 56s calico-node-97rx8 1/1 Running 0 57s helm-install-traefik-pn84f 0/1 Completed 1 57s svclb-traefik-lmjr5 2/2 Running 0 28s calico-kube-controllers-58b656d69f-2z7cn 1/1 Running 0 56s local-path-provisioner-6d59f47c7-rmswg 1/1 Running 0 56s traefik-758cd5fc85-x8p57 1/1 Running 0 28s coredns-8655855d6-cxtnr 1/1 Running 0 56s Note : - you can use the auto deploy manifest or a kubectl apply depending on your needs - Calico is not as quick as Flannel (but it provides more features) References \u00b6 https://rancher.com/docs/k3s/latest/en/installation/network-options/ https://docs.projectcalico.org/getting-started/kubernetes/k3s/","title":"Use Calico instead of Flannel"},{"location":"usage/guides/calico/#use-calico-instead-of-flannel","text":"If you want to use NetworkPolicy you can use Calico in k3s instead of Flannel.","title":"Use Calico instead of Flannel"},{"location":"usage/guides/calico/#1-download-and-modify-the-calico-descriptor","text":"You can following the documentation And then you have to change the ConfigMap calico-config . On the cni_network_config add the entry for allowing IP forwarding \"container_settings\" : { \"allow_ip_forwarding\" : true } Or you can directly use this calico.yaml manifest","title":"1. Download and modify the Calico descriptor"},{"location":"usage/guides/calico/#2-create-the-cluster-without-flannel-and-with-calico","text":"On the k3s cluster creation : - add the flag --flannel-backend=none . For this, on k3d you need to forward this flag to k3s with the option --k3s-server-arg . - mount ( --volume ) the calico descriptor in the auto deploy manifest directory of k3s /var/lib/rancher/k3s/server/manifests/ So the command of the cluster creation is (when you are at root of the k3d repository) k3d cluster create \" ${ clustername } \" --k3s-server-arg '--flannel-backend=none' --volume \" $( pwd ) /docs/usage/guides/calico.yaml:/var/lib/rancher/k3s/server/manifests/calico.yaml\" In this example : - change \"${clustername}\" with the name of the cluster (or set a variable). - $(pwd)/docs/usage/guides/calico.yaml is the absolute path of the calico manifest, you can adapt it. You can add other options, see . The cluster will start without flannel and with Calico as CNI Plugin. 
For watching for the pod(s) deployment watch \"kubectl get pods -n kube-system\" You will have something like this at beginning (with the command line kubectl get pods -n kube-system ) NAME READY STATUS RESTARTS AGE helm-install-traefik-pn84f 0/1 Pending 0 3s calico-node-97rx8 0/1 Init:0/3 0 3s metrics-server-7566d596c8-hwnqq 0/1 Pending 0 2s calico-kube-controllers-58b656d69f-2z7cn 0/1 Pending 0 2s local-path-provisioner-6d59f47c7-rmswg 0/1 Pending 0 2s coredns-8655855d6-cxtnr 0/1 Pending 0 2s And when it finish to start NAME READY STATUS RESTARTS AGE metrics-server-7566d596c8-hwnqq 1/1 Running 0 56s calico-node-97rx8 1/1 Running 0 57s helm-install-traefik-pn84f 0/1 Completed 1 57s svclb-traefik-lmjr5 2/2 Running 0 28s calico-kube-controllers-58b656d69f-2z7cn 1/1 Running 0 56s local-path-provisioner-6d59f47c7-rmswg 1/1 Running 0 56s traefik-758cd5fc85-x8p57 1/1 Running 0 28s coredns-8655855d6-cxtnr 1/1 Running 0 56s Note : - you can use the auto deploy manifest or a kubectl apply depending on your needs - Calico is not as quick as Flannel (but it provides more features)","title":"2. Create the cluster without flannel and with calico"},{"location":"usage/guides/calico/#references","text":"https://rancher.com/docs/k3s/latest/en/installation/network-options/ https://docs.projectcalico.org/getting-started/kubernetes/k3s/","title":"References"},{"location":"usage/guides/cuda/","text":"Running CUDA workloads \u00b6 If you want to run CUDA workloads on the K3S container you need to customize the container. CUDA workloads require the NVIDIA Container Runtime, so containerd needs to be configured to use this runtime. The K3S container itself also needs to run with this runtime. If you are using Docker you can install the NVIDIA Container Toolkit . Building a customized K3S image \u00b6 To get the NVIDIA container runtime in the K3S image you need to build your own K3S image. The native K3S image is based on Alpine but the NVIDIA container runtime is not supported on Alpine yet. To get around this we need to build the image with a supported base image. 
Adapt the Dockerfile \u00b6 FROM ubuntu:18.04 as base RUN apt-get update -y && apt-get install -y ca-certificates ADD k3s/build/out/data.tar.gz /image RUN mkdir -p /image/etc/ssl/certs /image/run /image/var/run /image/tmp /image/lib/modules /image/lib/firmware && \\ cp /etc/ssl/certs/ca-certificates.crt /image/etc/ssl/certs/ca-certificates.crt RUN cd image/bin && \\ rm -f k3s && \\ ln -s k3s-server k3s FROM ubuntu:18.04 RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections RUN apt-get update -y && apt-get -y install gnupg2 curl # Install the NVIDIA CUDA drivers and Container Runtime RUN apt-key adv --fetch-keys http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub RUN sh -c 'echo \"deb http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 /\" > /etc/apt/sources.list.d/cuda.list' RUN curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | apt-key add - RUN curl -s -L https://nvidia.github.io/nvidia-container-runtime/ubuntu18.04/nvidia-container-runtime.list | tee /etc/apt/sources.list.d/nvidia-container-runtime.list RUN apt-get update -y RUN apt-get -y install cuda-drivers nvidia-container-runtime COPY --from = base /image / RUN mkdir -p /etc && \\ echo 'hosts: files dns' > /etc/nsswitch.conf RUN chmod 1777 /tmp # Provide custom containerd configuration to configure the nvidia-container-runtime RUN mkdir -p /var/lib/rancher/k3s/agent/etc/containerd/ COPY config.toml.tmpl /var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl # Deploy the nvidia driver plugin on startup RUN mkdir -p /var/lib/rancher/k3s/server/manifests COPY gpu.yaml /var/lib/rancher/k3s/server/manifests/gpu.yaml VOLUME /var/lib/kubelet VOLUME /var/lib/rancher/k3s VOLUME /var/lib/cni VOLUME /var/log ENV PATH = \" $PATH :/bin/aux\" ENTRYPOINT [ \"/bin/k3s\" ] CMD [ \"agent\" ] This Dockerfile is based on the K3S Dockerfile . The following changes are applied: 1. Change the base images to Ubuntu 18.04 so the NVIDIA Container Runtime can be installed 2. Add a custom containerd config.toml template to add the NVIDIA Container Runtime. This replaces the default runc runtime 3. Add a manifest for the NVIDIA driver plugin for Kubernetes Configure containerd \u00b6 We need to configure containerd to use the NVIDIA Container Runtime. We need to customize the config.toml that is used at startup. K3S provides a way to do this using a config.toml.tmpl file. More information can be found on the K3S site . [ plugins . opt ] path = \"{{ .NodeConfig.Containerd.Opt }}\" [ plugins . cri ] stream_server_address = \"127.0.0.1\" stream_server_port = \"10010\" {{ - if . IsRunningInUserNS }} disable_cgroup = true disable_apparmor = true restrict_oom_score_adj = true {{ end }} {{ - if . NodeConfig . AgentConfig . PauseImage }} sandbox_image = \"{{ .NodeConfig.AgentConfig.PauseImage }}\" {{ end }} {{ - if not . NodeConfig . NoFlannel }} [ plugins . cri . cni ] bin_dir = \"{{ .NodeConfig.AgentConfig.CNIBinDir }}\" conf_dir = \"{{ .NodeConfig.AgentConfig.CNIConfDir }}\" {{ end }} [ plugins . cri . containerd . runtimes . runc ] # ---- changed from ' io . containerd . runc . v2 ' for GPU support runtime_type = \"io.containerd.runtime.v1.linux\" # ---- added for GPU support [ plugins . linux ] runtime = \"nvidia-container-runtime\" {{ if . PrivateRegistryConfig }} {{ if . PrivateRegistryConfig . Mirrors }} [ plugins . cri . registry . mirrors ]{{ end }} {{ range $ k , $ v := . PrivateRegistryConfig . Mirrors }} [ plugins . cri . registry . mirrors . 
\"{{$k}}\" ] endpoint = [{{ range $ i , $ j := $ v . Endpoints }}{{ if $ i }}, {{ end }}{{ printf \"%q\" .}}{{ end }}] {{ end }} {{ range $ k , $ v := . PrivateRegistryConfig . Configs }} {{ if $ v . Auth }} [ plugins . cri . registry . configs . \"{{$k}}\" . auth ] {{ if $ v . Auth . Username }} username = \"{{ $v.Auth.Username }}\" {{ end }} {{ if $ v . Auth . Password }} password = \"{{ $v.Auth.Password }}\" {{ end }} {{ if $ v . Auth . Auth }} auth = \"{{ $v.Auth.Auth }}\" {{ end }} {{ if $ v . Auth . IdentityToken }} identitytoken = \"{{ $v.Auth.IdentityToken }}\" {{ end }} {{ end }} {{ if $ v . TLS }} [ plugins . cri . registry . configs . \"{{$k}}\" . tls ] {{ if $ v . TLS . CAFile }} ca_file = \"{{ $v.TLS.CAFile }}\" {{ end }} {{ if $ v . TLS . CertFile }} cert_file = \"{{ $v.TLS.CertFile }}\" {{ end }} {{ if $ v . TLS . KeyFile }} key_file = \"{{ $v.TLS.KeyFile }}\" {{ end }} {{ end }} {{ end }} {{ end }} The NVIDIA device plugin \u00b6 To enable NVIDIA GPU support on Kubernetes you also need to install the NVIDIA device plugin . The device plugin is a daemonset and allows you to automatically: * Expose the number of GPUs on each nodes of your cluster * Keep track of the health of your GPUs * Run GPU enabled containers in your Kubernetes cluster. apiVersion : apps/v1 kind : DaemonSet metadata : name : nvidia-device-plugin-daemonset namespace : kube-system spec : selector : matchLabels : name : nvidia-device-plugin-ds template : metadata : # Mark this pod as a critical add-on; when enabled, the critical add-on scheduler # reserves resources for critical add-on pods so that they can be rescheduled after # a failure. This annotation works in tandem with the toleration below. annotations : scheduler.alpha.kubernetes.io/critical-pod : \"\" labels : name : nvidia-device-plugin-ds spec : tolerations : # Allow this pod to be rescheduled while the node is in \"critical add-ons only\" mode. # This, along with the annotation above marks this pod as a critical add-on. - key : CriticalAddonsOnly operator : Exists containers : - env : - name : DP_DISABLE_HEALTHCHECKS value : xids image : nvidia/k8s-device-plugin:1.11 name : nvidia-device-plugin-ctr securityContext : allowPrivilegeEscalation : true capabilities : drop : [ \"ALL\" ] volumeMounts : - name : device-plugin mountPath : /var/lib/kubelet/device-plugins volumes : - name : device-plugin hostPath : path : /var/lib/kubelet/device-plugins Build the K3S image \u00b6 To build the custom image we need to build K3S because we need the generated output. Put the following files in a directory: * Dockerfile * config.toml.tmpl * gpu.yaml * build.sh * cuda-vector-add.yaml The build.sh files takes the K3S git tag as argument, it defaults to v1.18.10+k3s1 . The script performs the following steps: * pulls K3S * builds K3S * build the custom K3S Docker image The resulting image is tagged as k3s-gpu:. The version tag is the git tag but the \u2018+\u2019 sign is replaced with a \u2018-\u2018. build.sh : #!/bin/bash set -e cd $( dirname $0 ) K3S_TAG = \" ${ 1 :- v1 .18.10+k3s1 } \" IMAGE_TAG = \" ${ K3S_TAG /+/- } \" if [ -d k3s ] ; then rm -rf k3s fi git clone --depth 1 https://github.com/rancher/k3s.git -b $K3S_TAG cd k3s make cd .. docker build -t k3s-gpu: $IMAGE_TAG . 
Run and test the custom image with Docker \u00b6 You can run a container based on the new image with Docker: docker run --name k3s-gpu -d --privileged --gpus all k3s-gpu:v1.18.10-k3s1 Deploy a test pod : docker cp cuda-vector-add.yaml k3s-gpu:/cuda-vector-add.yaml docker exec k3s-gpu kubectl apply -f /cuda-vector-add.yaml docker exec k3s-gpu kubectl logs cuda-vector-add Run and test the custom image with k3d \u00b6 Tou can use the image with k3d: k3d cluster create --no-lb --image k3s-gpu:v1.18.10-k3s1 --gpus all Deploy a test pod : kubectl apply -f cuda-vector-add.yaml kubectl logs cuda-vector-add Known issues \u00b6 This approach does not work on WSL2 yet. The NVIDIA driver plugin and container runtime rely on the NVIDIA Management Library (NVML) which is not yet supported. See the CUDA on WSL User Guide . Acknowledgements: \u00b6 Most of the information in this article was obtained from various sources: * Add NVIDIA GPU support to k3s with containerd * microk8s * K3S","title":"Running CUDA workloads"},{"location":"usage/guides/cuda/#running-cuda-workloads","text":"If you want to run CUDA workloads on the K3S container you need to customize the container. CUDA workloads require the NVIDIA Container Runtime, so containerd needs to be configured to use this runtime. The K3S container itself also needs to run with this runtime. If you are using Docker you can install the NVIDIA Container Toolkit .","title":"Running CUDA workloads"},{"location":"usage/guides/cuda/#building-a-customized-k3s-image","text":"To get the NVIDIA container runtime in the K3S image you need to build your own K3S image. The native K3S image is based on Alpine but the NVIDIA container runtime is not supported on Alpine yet. To get around this we need to build the image with a supported base image.","title":"Building a customized K3S image"},{"location":"usage/guides/cuda/#adapt-the-dockerfile","text":"FROM ubuntu:18.04 as base RUN apt-get update -y && apt-get install -y ca-certificates ADD k3s/build/out/data.tar.gz /image RUN mkdir -p /image/etc/ssl/certs /image/run /image/var/run /image/tmp /image/lib/modules /image/lib/firmware && \\ cp /etc/ssl/certs/ca-certificates.crt /image/etc/ssl/certs/ca-certificates.crt RUN cd image/bin && \\ rm -f k3s && \\ ln -s k3s-server k3s FROM ubuntu:18.04 RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections RUN apt-get update -y && apt-get -y install gnupg2 curl # Install the NVIDIA CUDA drivers and Container Runtime RUN apt-key adv --fetch-keys http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub RUN sh -c 'echo \"deb http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 /\" > /etc/apt/sources.list.d/cuda.list' RUN curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | apt-key add - RUN curl -s -L https://nvidia.github.io/nvidia-container-runtime/ubuntu18.04/nvidia-container-runtime.list | tee /etc/apt/sources.list.d/nvidia-container-runtime.list RUN apt-get update -y RUN apt-get -y install cuda-drivers nvidia-container-runtime COPY --from = base /image / RUN mkdir -p /etc && \\ echo 'hosts: files dns' > /etc/nsswitch.conf RUN chmod 1777 /tmp # Provide custom containerd configuration to configure the nvidia-container-runtime RUN mkdir -p /var/lib/rancher/k3s/agent/etc/containerd/ COPY config.toml.tmpl /var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl # Deploy the nvidia driver plugin on startup RUN mkdir -p /var/lib/rancher/k3s/server/manifests COPY gpu.yaml 
/var/lib/rancher/k3s/server/manifests/gpu.yaml VOLUME /var/lib/kubelet VOLUME /var/lib/rancher/k3s VOLUME /var/lib/cni VOLUME /var/log ENV PATH = \" $PATH :/bin/aux\" ENTRYPOINT [ \"/bin/k3s\" ] CMD [ \"agent\" ] This Dockerfile is based on the K3S Dockerfile . The following changes are applied: 1. Change the base images to Ubuntu 18.04 so the NVIDIA Container Runtime can be installed 2. Add a custom containerd config.toml template to add the NVIDIA Container Runtime. This replaces the default runc runtime 3. Add a manifest for the NVIDIA driver plugin for Kubernetes","title":"Adapt the Dockerfile"},{"location":"usage/guides/cuda/#configure-containerd","text":"We need to configure containerd to use the NVIDIA Container Runtime. We need to customize the config.toml that is used at startup. K3S provides a way to do this using a config.toml.tmpl file. More information can be found on the K3S site . [ plugins . opt ] path = \"{{ .NodeConfig.Containerd.Opt }}\" [ plugins . cri ] stream_server_address = \"127.0.0.1\" stream_server_port = \"10010\" {{ - if . IsRunningInUserNS }} disable_cgroup = true disable_apparmor = true restrict_oom_score_adj = true {{ end }} {{ - if . NodeConfig . AgentConfig . PauseImage }} sandbox_image = \"{{ .NodeConfig.AgentConfig.PauseImage }}\" {{ end }} {{ - if not . NodeConfig . NoFlannel }} [ plugins . cri . cni ] bin_dir = \"{{ .NodeConfig.AgentConfig.CNIBinDir }}\" conf_dir = \"{{ .NodeConfig.AgentConfig.CNIConfDir }}\" {{ end }} [ plugins . cri . containerd . runtimes . runc ] # ---- changed from ' io . containerd . runc . v2 ' for GPU support runtime_type = \"io.containerd.runtime.v1.linux\" # ---- added for GPU support [ plugins . linux ] runtime = \"nvidia-container-runtime\" {{ if . PrivateRegistryConfig }} {{ if . PrivateRegistryConfig . Mirrors }} [ plugins . cri . registry . mirrors ]{{ end }} {{ range $ k , $ v := . PrivateRegistryConfig . Mirrors }} [ plugins . cri . registry . mirrors . \"{{$k}}\" ] endpoint = [{{ range $ i , $ j := $ v . Endpoints }}{{ if $ i }}, {{ end }}{{ printf \"%q\" .}}{{ end }}] {{ end }} {{ range $ k , $ v := . PrivateRegistryConfig . Configs }} {{ if $ v . Auth }} [ plugins . cri . registry . configs . \"{{$k}}\" . auth ] {{ if $ v . Auth . Username }} username = \"{{ $v.Auth.Username }}\" {{ end }} {{ if $ v . Auth . Password }} password = \"{{ $v.Auth.Password }}\" {{ end }} {{ if $ v . Auth . Auth }} auth = \"{{ $v.Auth.Auth }}\" {{ end }} {{ if $ v . Auth . IdentityToken }} identitytoken = \"{{ $v.Auth.IdentityToken }}\" {{ end }} {{ end }} {{ if $ v . TLS }} [ plugins . cri . registry . configs . \"{{$k}}\" . tls ] {{ if $ v . TLS . CAFile }} ca_file = \"{{ $v.TLS.CAFile }}\" {{ end }} {{ if $ v . TLS . CertFile }} cert_file = \"{{ $v.TLS.CertFile }}\" {{ end }} {{ if $ v . TLS . KeyFile }} key_file = \"{{ $v.TLS.KeyFile }}\" {{ end }} {{ end }} {{ end }} {{ end }}","title":"Configure containerd"},{"location":"usage/guides/cuda/#the-nvidia-device-plugin","text":"To enable NVIDIA GPU support on Kubernetes you also need to install the NVIDIA device plugin . The device plugin is a daemonset and allows you to automatically: * Expose the number of GPUs on each nodes of your cluster * Keep track of the health of your GPUs * Run GPU enabled containers in your Kubernetes cluster. 
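Once the DaemonSet manifest shown below is deployed, a quick sanity check (hedged; it assumes your cluster's kubeconfig is already active) is to look for the plugin and for the advertised GPU resource:
# the device plugin from the manifest below should be running in kube-system
kubectl -n kube-system get daemonset nvidia-device-plugin-daemonset
# the node(s) should then advertise an nvidia.com/gpu resource
kubectl describe nodes | grep -A 3 'nvidia.com/gpu'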
apiVersion : apps/v1 kind : DaemonSet metadata : name : nvidia-device-plugin-daemonset namespace : kube-system spec : selector : matchLabels : name : nvidia-device-plugin-ds template : metadata : # Mark this pod as a critical add-on; when enabled, the critical add-on scheduler # reserves resources for critical add-on pods so that they can be rescheduled after # a failure. This annotation works in tandem with the toleration below. annotations : scheduler.alpha.kubernetes.io/critical-pod : \"\" labels : name : nvidia-device-plugin-ds spec : tolerations : # Allow this pod to be rescheduled while the node is in \"critical add-ons only\" mode. # This, along with the annotation above marks this pod as a critical add-on. - key : CriticalAddonsOnly operator : Exists containers : - env : - name : DP_DISABLE_HEALTHCHECKS value : xids image : nvidia/k8s-device-plugin:1.11 name : nvidia-device-plugin-ctr securityContext : allowPrivilegeEscalation : true capabilities : drop : [ \"ALL\" ] volumeMounts : - name : device-plugin mountPath : /var/lib/kubelet/device-plugins volumes : - name : device-plugin hostPath : path : /var/lib/kubelet/device-plugins","title":"The NVIDIA device plugin"},{"location":"usage/guides/cuda/#build-the-k3s-image","text":"To build the custom image, we first need to build K3S, because we need its generated output. Put the following files in a directory: * Dockerfile * config.toml.tmpl * gpu.yaml * build.sh * cuda-vector-add.yaml The build.sh file takes the K3S git tag as an argument; it defaults to v1.18.10+k3s1 . The script performs the following steps: * pulls K3S * builds K3S * builds the custom K3S Docker image The resulting image is tagged as k3s-gpu:<version tag>. The version tag is the git tag, but the \u2018+\u2019 sign is replaced with a \u2018-\u2019. build.sh : #!/bin/bash set -e cd $( dirname $0 ) K3S_TAG = \" ${ 1 :- v1 .18.10+k3s1 } \" IMAGE_TAG = \" ${ K3S_TAG /+/- } \" if [ -d k3s ] ; then rm -rf k3s fi git clone --depth 1 https://github.com/rancher/k3s.git -b $K3S_TAG cd k3s make cd .. docker build -t k3s-gpu: $IMAGE_TAG .","title":"Build the K3S image"},{"location":"usage/guides/cuda/#run-and-test-the-custom-image-with-docker","text":"You can run a container based on the new image with Docker: docker run --name k3s-gpu -d --privileged --gpus all k3s-gpu:v1.18.10-k3s1 Deploy a test pod : docker cp cuda-vector-add.yaml k3s-gpu:/cuda-vector-add.yaml docker exec k3s-gpu kubectl apply -f /cuda-vector-add.yaml docker exec k3s-gpu kubectl logs cuda-vector-add","title":"Run and test the custom image with Docker"},{"location":"usage/guides/cuda/#run-and-test-the-custom-image-with-k3d","text":"You can use the image with k3d: k3d cluster create --no-lb --image k3s-gpu:v1.18.10-k3s1 --gpus all Deploy a test pod : kubectl apply -f cuda-vector-add.yaml kubectl logs cuda-vector-add","title":"Run and test the custom image with k3d"},{"location":"usage/guides/cuda/#known-issues","text":"This approach does not work on WSL2 yet. The NVIDIA driver plugin and container runtime rely on the NVIDIA Management Library (NVML) which is not yet supported. See the CUDA on WSL User Guide .","title":"Known issues"},{"location":"usage/guides/cuda/#acknowledgements","text":"Most of the information in this article was obtained from various sources: * Add NVIDIA GPU support to k3s with containerd * microk8s * K3S","title":"Acknowledgements:"},{"location":"usage/guides/exposing_services/","text":"Exposing Services \u00b6 1.
via Ingress \u00b6 In this example, we will deploy a simple nginx webserver deployment and make it accessible via ingress. Therefore, we have to create the cluster in a way, that the internal port 80 (where the traefik ingress controller is listening on) is exposed on the host system. Create a cluster, mapping the ingress port 80 to localhost:8081 k3d cluster create --api-port 6550 -p \"8081:80@loadbalancer\" --agents 2 Good to know --api-port 6550 is not required for the example to work. It\u2019s used to have k3s \u2018s API-Server listening on port 6550 with that port mapped to the host system. the port-mapping construct 8081:80@loadbalancer means map port 8081 from the host to port 80 on the container which matches the nodefilter loadbalancer the loadbalancer nodefilter matches only the serverlb that\u2019s deployed in front of a cluster\u2019s server nodes all ports exposed on the serverlb will be proxied to the same ports on all server nodes in the cluster Get the kubeconfig file export KUBECONFIG = \" $( k3d kubeconfig write k3s-default ) \" Create a nginx deployment kubectl create deployment nginx --image = nginx Create a ClusterIP service for it kubectl create service clusterip nginx --tcp = 80 :80 Create an ingress object for it with kubectl apply -f Note : k3s deploys traefik as the default ingress controller # apiVersion: networking.k8s.io/v1beta1 # for k3s < v1.19 apiVersion : networking.k8s.io/v1 kind : Ingress metadata : name : nginx annotations : ingress.kubernetes.io/ssl-redirect : \"false\" spec : rules : - http : paths : - path : / pathType : Prefix backend : service : name : nginx port : number : 80 Curl it via localhost curl localhost:8081/ 2. via NodePort \u00b6 Create a cluster, mapping the port 30080 from agent-0 to localhost:8082 k3d cluster create mycluster -p \"8082:30080@agent[0]\" --agents 2 Note : Kubernetes\u2019 default NodePort range is 30000-32767 Note : You may as well expose the whole NodePort range from the very beginning, e.g. via k3d cluster create mycluster --agents 3 -p \"30000-32767:30000-32767@server[0]\" (See this video from @portainer ) \u2026 (Steps 2 and 3 like above) \u2026 Create a NodePort service for it with kubectl apply -f apiVersion : v1 kind : Service metadata : labels : app : nginx name : nginx spec : ports : - name : 80-80 nodePort : 30080 port : 80 protocol : TCP targetPort : 80 selector : app : nginx type : NodePort Curl it via localhost curl localhost:8082/","title":"Exposing Services"},{"location":"usage/guides/exposing_services/#exposing-services","text":"","title":"Exposing Services"},{"location":"usage/guides/exposing_services/#1-via-ingress","text":"In this example, we will deploy a simple nginx webserver deployment and make it accessible via ingress. Therefore, we have to create the cluster in a way, that the internal port 80 (where the traefik ingress controller is listening on) is exposed on the host system. Create a cluster, mapping the ingress port 80 to localhost:8081 k3d cluster create --api-port 6550 -p \"8081:80@loadbalancer\" --agents 2 Good to know --api-port 6550 is not required for the example to work. It\u2019s used to have k3s \u2018s API-Server listening on port 6550 with that port mapped to the host system. 
the port-mapping construct 8081:80@loadbalancer means map port 8081 from the host to port 80 on the container which matches the nodefilter loadbalancer the loadbalancer nodefilter matches only the serverlb that\u2019s deployed in front of a cluster\u2019s server nodes all ports exposed on the serverlb will be proxied to the same ports on all server nodes in the cluster Get the kubeconfig file export KUBECONFIG = \" $( k3d kubeconfig write k3s-default ) \" Create a nginx deployment kubectl create deployment nginx --image = nginx Create a ClusterIP service for it kubectl create service clusterip nginx --tcp = 80 :80 Create an ingress object for it with kubectl apply -f Note : k3s deploys traefik as the default ingress controller # apiVersion: networking.k8s.io/v1beta1 # for k3s < v1.19 apiVersion : networking.k8s.io/v1 kind : Ingress metadata : name : nginx annotations : ingress.kubernetes.io/ssl-redirect : \"false\" spec : rules : - http : paths : - path : / pathType : Prefix backend : service : name : nginx port : number : 80 Curl it via localhost curl localhost:8081/","title":"1. via Ingress"},{"location":"usage/guides/exposing_services/#2-via-nodeport","text":"Create a cluster, mapping the port 30080 from agent-0 to localhost:8082 k3d cluster create mycluster -p \"8082:30080@agent[0]\" --agents 2 Note : Kubernetes\u2019 default NodePort range is 30000-32767 Note : You may as well expose the whole NodePort range from the very beginning, e.g. via k3d cluster create mycluster --agents 3 -p \"30000-32767:30000-32767@server[0]\" (See this video from @portainer ) \u2026 (Steps 2 and 3 like above) \u2026 Create a NodePort service for it with kubectl apply -f apiVersion : v1 kind : Service metadata : labels : app : nginx name : nginx spec : ports : - name : 80-80 nodePort : 30080 port : 80 protocol : TCP targetPort : 80 selector : app : nginx type : NodePort Curl it via localhost curl localhost:8082/","title":"2. via NodePort"},{"location":"usage/guides/registries/","text":"Registries \u00b6 Registries configuration file \u00b6 You can add registries by specifying them in a registries.yaml and mounting them at creation time: k3d cluster create mycluster --volume \"/home/YOU/my-registries.yaml:/etc/rancher/k3s/registries.yaml\" . This file is a regular k3s registries configuration file , and looks like this: mirrors : \"my.company.registry:5000\" : endpoint : - http://my.company.registry:5000 In this example, an image with a name like my.company.registry:5000/nginx:latest would be pulled from the registry running at http://my.company.registry:5000 . Note well there is an important limitation: this configuration file will only work with k3s >= v0.10.0 . It will fail silently with previous versions of k3s, but you find in the section below an alternative solution. This file can also be used for providing additional information necessary for accessing some registries, like authentication and certificates . Authenticated registries \u00b6 When using authenticated registries, we can add the username and password in a configs section in the registries.yaml , like this: mirrors : my.company.registry : endpoint : - http://my.company.registry configs : my.company.registry : auth : username : aladin password : abracadabra Secure registries \u00b6 When using secure registries, the registries.yaml file must include information about the certificates. 
For example, if you want to use images from the secure registry running at https://my.company.registry , you must first download a CA file valid for that server and store it in some well-known directory like ${HOME}/.k3d/my-company-root.pem . Then you have to mount the CA file in some directory in the nodes in the cluster and include that mounted file in a configs section in the registries.yaml file . For example, if we mount the CA file in /etc/ssl/certs/my-company-root.pem , the registries.yaml will look like: mirrors : my.company.registry : endpoint : - https://my.company.registry configs : my.company.registry : tls : # we will mount \"my-company-root.pem\" in the /etc/ssl/certs/ directory. ca_file : \"/etc/ssl/certs/my-company-root.pem\" Finally, we can create the cluster, mounting the CA file in the path we specified in ca_file : k3d cluster create --volume \" ${ HOME } /.k3d/my-registries.yaml:/etc/rancher/k3s/registries.yaml\" --volume \" ${ HOME } /.k3d/my-company-root.pem:/etc/ssl/certs/my-company-root.pem\" Using a local registry \u00b6 Using the k3d registry \u00b6 Not ported yet The k3d-managed registry has not yet been ported from v1.x to v3.x Using your own local registry \u00b6 You can start your own local registry it with some docker commands, like: docker volume create local_registry docker container run -d --name registry.localhost -v local_registry:/var/lib/registry --restart always -p 5000 :5000 registry:2 These commands will start your registry in registry.localhost:5000 . In order to push to this registry, you will need to make it accessible as described in the next section. Once your registry is up and running, we will need to add it to your registries.yaml configuration file. Finally, you have to connect the registry network to the k3d cluster network: docker network connect k3d-k3s-default registry.localhost . And then you can test your local registry . Pushing to your local registry address \u00b6 As per the guide above, the registry will be available at registry.localhost:5000 . All the nodes in your k3d cluster can resolve this hostname (thanks to the DNS server provided by the Docker daemon) but, in order to be able to push to this registry, this hostname also has to be resolved by your host. Luckily (for Linux users), NSS-myhostname ships with many Linux distributions and should resolve *.localhost automatically to 127.0.0.1 . Otherwise, it\u2019s installable using sudo apt install libnss-myhostname . If it\u2019s not the case, you can add an entry in your /etc/hosts file like this: 127 .0.0.1 registry.localhost Once again, this will only work with k3s >= v0.10.0 (see the some sections below when using k3s <= v0.9.1) Testing your registry \u00b6 You should test that you can push to your registry from your local development machine. use images from that registry in Deployments in your k3d cluster. We will verify these two things for a local registry (located at registry.localhost:5000 ) running in your development machine. Things would be basically the same for checking an external registry, but some additional configuration could be necessary in your local machine when using an authenticated or secure registry (please refer to Docker\u2019s documentation for this). 
First, we can download some image (like nginx ) and push it to our local registry with: docker pull nginx:latest docker tag nginx:latest registry.localhost:5000/nginx:latest docker push registry.localhost:5000/nginx:latest Then we can deploy a pod referencing this image to your cluster: cat <= v0.10.0 . It will fail silently with previous versions of k3s, but you will find an alternative solution in the section below. This file can also be used for providing additional information necessary for accessing some registries, like authentication and certificates .","title":"Registries configuration file"},{"location":"usage/guides/registries/#authenticated-registries","text":"When using authenticated registries, we can add the username and password in a configs section in the registries.yaml , like this: mirrors : my.company.registry : endpoint : - http://my.company.registry configs : my.company.registry : auth : username : aladin password : abracadabra","title":"Authenticated registries"},{"location":"usage/guides/registries/#secure-registries","text":"When using secure registries, the registries.yaml file must include information about the certificates. For example, if you want to use images from the secure registry running at https://my.company.registry , you must first download a CA file valid for that server and store it in some well-known directory like ${HOME}/.k3d/my-company-root.pem . Then you have to mount the CA file in some directory in the nodes in the cluster and include that mounted file in a configs section in the registries.yaml file . For example, if we mount the CA file in /etc/ssl/certs/my-company-root.pem , the registries.yaml will look like: mirrors : my.company.registry : endpoint : - https://my.company.registry configs : my.company.registry : tls : # we will mount \"my-company-root.pem\" in the /etc/ssl/certs/ directory. ca_file : \"/etc/ssl/certs/my-company-root.pem\" Finally, we can create the cluster, mounting the CA file in the path we specified in ca_file : k3d cluster create --volume \" ${ HOME } /.k3d/my-registries.yaml:/etc/rancher/k3s/registries.yaml\" --volume \" ${ HOME } /.k3d/my-company-root.pem:/etc/ssl/certs/my-company-root.pem\"","title":"Secure registries"},{"location":"usage/guides/registries/#using-a-local-registry","text":"","title":"Using a local registry"},{"location":"usage/guides/registries/#using-the-k3d-registry","text":"Not ported yet The k3d-managed registry has not yet been ported from v1.x to v3.x","title":"Using the k3d registry"},{"location":"usage/guides/registries/#using-your-own-local-registry","text":"You can start your own local registry with some docker commands, like: docker volume create local_registry docker container run -d --name registry.localhost -v local_registry:/var/lib/registry --restart always -p 5000 :5000 registry:2 These commands will start your registry in registry.localhost:5000 . In order to push to this registry, you will need to make it accessible as described in the next section. Once your registry is up and running, we will need to add it to your registries.yaml configuration file. Finally, you have to connect the registry network to the k3d cluster network: docker network connect k3d-k3s-default registry.localhost . And then you can test your local registry .","title":"Using your own local registry"},{"location":"usage/guides/registries/#pushing-to-your-local-registry-address","text":"As per the guide above, the registry will be available at registry.localhost:5000 .
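Before dealing with name resolution, you can optionally confirm that the registry container itself responds; a hedged check against the standard Docker Registry HTTP API, assuming the container from the section above is running with port 5000 published to the host:
# an empty registry answers with an empty repositories list
curl http://localhost:5000/v2/_catalog
Whether the registry is also reachable under the registry.localhost name is a separate question, covered next.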
All the nodes in your k3d cluster can resolve this hostname (thanks to the DNS server provided by the Docker daemon) but, in order to be able to push to this registry, this hostname also has to be resolved by your host. Luckily (for Linux users), NSS-myhostname ships with many Linux distributions and should resolve *.localhost automatically to 127.0.0.1 . Otherwise, it\u2019s installable using sudo apt install libnss-myhostname . If it\u2019s not the case, you can add an entry in your /etc/hosts file like this: 127 .0.0.1 registry.localhost Once again, this will only work with k3s >= v0.10.0 (see the some sections below when using k3s <= v0.9.1)","title":"Pushing to your local registry address"},{"location":"usage/guides/registries/#testing-your-registry","text":"You should test that you can push to your registry from your local development machine. use images from that registry in Deployments in your k3d cluster. We will verify these two things for a local registry (located at registry.localhost:5000 ) running in your development machine. Things would be basically the same for checking an external registry, but some additional configuration could be necessary in your local machine when using an authenticated or secure registry (please refer to Docker\u2019s documentation for this). First, we can download some image (like nginx ) and push it to our local registry with: ```shell script docker pull nginx:latest docker tag nginx:latest registry.localhost:5000/nginx:latest docker push registry.localhost:5000/nginx:latest Then we can deploy a pod referencing this image to your cluster: ```shell script cat < with = latest or 3.x.x for a specific version (maintained by spencergilbert/asdf-k3d ) use Chocolatey : choco install k3d (Chocolatey package manager is available for Windows) package source can be found in erwinkersten/chocolatey-packages Quick Start \u00b6 Create a cluster named mycluster with just a single server node: k3d cluster create mycluster Get the new cluster\u2019s connection details merged into your default kubeconfig (usually specified using the KUBECONFIG environment variable or the default path $HOME /.kube/config ) and directly switch to the new context: k3d kubeconfig merge mycluster --kubeconfig-switch-context Use the new cluster with kubectl , e.g.: kubectl get nodes Related Projects \u00b6 k3x : a graphics interface (for Linux) to k3d.","title":"Overview"},{"location":"#overview","text":"This page is targeting k3d v4.0.0 and newer! k3d is a lightweight wrapper to run k3s (Rancher Lab\u2019s minimal Kubernetes distribution) in docker. k3d makes it very easy to create single- and multi-node k3s clusters in docker, e.g. for local development on Kubernetes. 
View a quick demo","title":"Overview"},{"location":"#learning","text":"Rancher Meetup - May 2020 - Simplifying Your Cloud-Native Development Workflow With K3s, K3c and K3d (YouTube) k3d demo repository: iwilltry42/k3d-demo","title":"Learning"},{"location":"#requirements","text":"docker","title":"Requirements"},{"location":"#releases","text":"Platform Stage Version Release Date GitHub Releases stable GitHub Releases latest Homebrew - - Chocolatey stable -","title":"Releases"},{"location":"#installation","text":"You have several options there: use the install script to grab the latest release: wget: wget -q -O - https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash curl: curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash use the install script to grab a specific release (via TAG environment variable): wget: wget -q -O - https://raw.githubusercontent.com/rancher/k3d/main/install.sh | TAG = v4.0.0 bash curl: curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | TAG = v4.0.0 bash use Homebrew : brew install k3d (Homebrew is available for MacOS and Linux) Formula can be found in homebrew/homebrew-core and is mirrored to homebrew/linuxbrew-core install via AUR package rancher-k3d-bin : yay -S rancher-k3d-bin grab a release from the release tab and install it yourself. install via go: go install github.com/rancher/k3d ( Note : this will give you unreleased/bleeding-edge changes) use arkade : arkade get k3d use asdf : asdf plugin-add k3d , then asdf install k3d with = latest or 3.x.x for a specific version (maintained by spencergilbert/asdf-k3d ) use Chocolatey : choco install k3d (Chocolatey package manager is available for Windows) package source can be found in erwinkersten/chocolatey-packages","title":"Installation"},{"location":"#quick-start","text":"Create a cluster named mycluster with just a single server node: k3d cluster create mycluster Get the new cluster\u2019s connection details merged into your default kubeconfig (usually specified using the KUBECONFIG environment variable or the default path $HOME /.kube/config ) and directly switch to the new context: k3d kubeconfig merge mycluster --kubeconfig-switch-context Use the new cluster with kubectl , e.g.: kubectl get nodes","title":"Quick Start"},{"location":"#related-projects","text":"k3x : a graphics interface (for Linux) to k3d.","title":"Related Projects"},{"location":"faq/faq/","text":"FAQ / Nice to know \u00b6 Issues with BTRFS \u00b6 As @jaredallard pointed out , people running k3d on a system with btrfs , may need to mount /dev/mapper into the nodes for the setup to work. This will do: k3d cluster create CLUSTER_NAME -v /dev/mapper:/dev/mapper Issues with ZFS \u00b6 k3s currently has no support for ZFS and thus, creating multi-server setups (e.g. k3d cluster create multiserver --servers 3 ) fails, because the initializing server node (server flag --cluster-init ) errors out with the following log: starting kubernetes: preparing server: start cluster and https: raft_init () : io: create I/O capabilities probe file: posix_allocate: operation not supported on socket This issue can be worked around by providing docker with a different filesystem (that\u2019s also better for docker-in-docker stuff). 
A possible solution can be found here: https://github.com/rancher/k3s/issues/1688#issuecomment-619570374 Pods evicted due to lack of disk space \u00b6 Pods go to evicted state after doing X Related issues: #133 - Pods evicted due to NodeHasDiskPressure (collection of #119 and #130) Background: somehow docker runs out of space for the k3d node containers, which triggers a hard eviction in the kubelet Possible fix/workaround by @zer0def : use a docker storage driver which cleans up properly (e.g. overlay2) clean up or expand docker root filesystem change the kubelet\u2019s eviction thresholds upon cluster creation: k3d cluster create --k3s-agent-arg '--kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1%' --k3s-agent-arg '--kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1%' Restarting a multi-server cluster or the initializing server node fails \u00b6 What you do: You create a cluster with more than one server node and later, you either stop server-0 or stop/start the whole cluster What fails: After the restart, you cannot connect to the cluster anymore and kubectl will give you a lot of errors What causes this issue: it\u2019s a known issue with dqlite in k3s which doesn\u2019t allow the initializing server node to go down What\u2019s the solution: Hopefully, this will be solved by the planned replacement of dqlite with embedded etcd in k3s Related issues: #262 Passing additional arguments/flags to k3s (and on to e.g. the kube-apiserver) \u00b6 The Problem: Passing a feature flag to the Kubernetes API Server running inside k3s. Example: you want to enable the EphemeralContainers feature flag in Kubernetes Solution: k3d cluster create --k3s-server-arg '--kube-apiserver-arg=feature-gates=EphemeralContainers=true' Note: Be aware of where the flags require dashes ( -- ) and where not. the k3s flag ( --kube-apiserver-arg ) has the dashes the kube-apiserver flag feature-gates doesn\u2019t have them (k3s adds them internally) Second example: k3d cluster create k3d-one --k3s-server-arg --cluster-cidr = \"10.118.0.0/17\" --k3s-server-arg --service-cidr = \"10.118.128.0/17\" --k3s-server-arg --disable = servicelb --k3s-server-arg --disable = traefik --verbose Note: There are many ways to use the \" and ' quotes, just be aware, that sometimes shells also try to interpret/interpolate parts of the commands How to access services (like a database) running on my Docker Host Machine \u00b6 As of version v3.1.0, we\u2019re injecting the host.k3d.internal entry into the k3d containers (k3s nodes) and into the CoreDNS ConfigMap, enabling you to access your host system by referring to it as host.k3d.internal","title":"FAQ / Nice to know"},{"location":"faq/faq/#faq-nice-to-know","text":"","title":"FAQ / Nice to know"},{"location":"faq/faq/#issues-with-btrfs","text":"As @jaredallard pointed out , people running k3d on a system with btrfs , may need to mount /dev/mapper into the nodes for the setup to work. This will do: k3d cluster create CLUSTER_NAME -v /dev/mapper:/dev/mapper","title":"Issues with BTRFS"},{"location":"faq/faq/#issues-with-zfs","text":"k3s currently has no support for ZFS and thus, creating multi-server setups (e.g. 
k3d cluster create multiserver --servers 3 ) fails, because the initializing server node (server flag --cluster-init ) errors out with the following log: starting kubernetes: preparing server: start cluster and https: raft_init () : io: create I/O capabilities probe file: posix_allocate: operation not supported on socket This issue can be worked around by providing docker with a different filesystem (that\u2019s also better for docker-in-docker stuff). A possible solution can be found here: https://github.com/rancher/k3s/issues/1688#issuecomment-619570374","title":"Issues with ZFS"},{"location":"faq/faq/#pods-evicted-due-to-lack-of-disk-space","text":"Pods go to evicted state after doing X Related issues: #133 - Pods evicted due to NodeHasDiskPressure (collection of #119 and #130) Background: somehow docker runs out of space for the k3d node containers, which triggers a hard eviction in the kubelet Possible fix/workaround by @zer0def : use a docker storage driver which cleans up properly (e.g. overlay2) clean up or expand docker root filesystem change the kubelet\u2019s eviction thresholds upon cluster creation: k3d cluster create --k3s-agent-arg '--kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1%' --k3s-agent-arg '--kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1%'","title":"Pods evicted due to lack of disk space"},{"location":"faq/faq/#restarting-a-multi-server-cluster-or-the-initializing-server-node-fails","text":"What you do: You create a cluster with more than one server node and later, you either stop server-0 or stop/start the whole cluster What fails: After the restart, you cannot connect to the cluster anymore and kubectl will give you a lot of errors What causes this issue: it\u2019s a known issue with dqlite in k3s which doesn\u2019t allow the initializing server node to go down What\u2019s the solution: Hopefully, this will be solved by the planned replacement of dqlite with embedded etcd in k3s Related issues: #262","title":"Restarting a multi-server cluster or the initializing server node fails"},{"location":"faq/faq/#passing-additional-argumentsflags-to-k3s-and-on-to-eg-the-kube-apiserver","text":"The Problem: Passing a feature flag to the Kubernetes API Server running inside k3s. Example: you want to enable the EphemeralContainers feature flag in Kubernetes Solution: k3d cluster create --k3s-server-arg '--kube-apiserver-arg=feature-gates=EphemeralContainers=true' Note: Be aware of where the flags require dashes ( -- ) and where not. the k3s flag ( --kube-apiserver-arg ) has the dashes the kube-apiserver flag feature-gates doesn\u2019t have them (k3s adds them internally) Second example: k3d cluster create k3d-one --k3s-server-arg --cluster-cidr = \"10.118.0.0/17\" --k3s-server-arg --service-cidr = \"10.118.128.0/17\" --k3s-server-arg --disable = servicelb --k3s-server-arg --disable = traefik --verbose Note: There are many ways to use the \" and ' quotes, just be aware, that sometimes shells also try to interpret/interpolate parts of the commands","title":"Passing additional arguments/flags to k3s (and on to e.g. 
the kube-apiserver)"},{"location":"faq/faq/#how-to-access-services-like-a-database-running-on-my-docker-host-machine","text":"As of version v3.1.0, we\u2019re injecting the host.k3d.internal entry into the k3d containers (k3s nodes) and into the CoreDNS ConfigMap, enabling you to access your host system by referring to it as host.k3d.internal","title":"How to access services (like a database) running on my Docker Host Machine"},{"location":"faq/v1vsv3-comparison/","text":"Feature Comparison: v1 vs. v3 \u00b6 v1.x feature -> implementation in v3 \u00b6 - k3d - check-tools -> won't do - shell -> planned: `k3d shell CLUSTER` - --name -> planned: drop (now as arg) - --command -> planned: keep - --shell -> planned: keep (or second arg) - auto, bash, zsh - create -> `k3d cluster create CLUSTERNAME` - --name -> dropped, implemented via arg - --volume -> implemented - --port -> implemented - --port-auto-offset -> TBD - --api-port -> implemented - --wait -> implemented - --image -> implemented - --server-arg -> implemented as `--k3s-server-arg` - --agent-arg -> implemented as `--k3s-agent-arg` - --env -> planned - --label -> planned - --workers -> implemented - --auto-restart -> dropped (docker's `unless-stopped` is set by default) - --enable-registry -> coming in v4.0.0 (2021) as `--registry-create` and `--registry-use` - --registry-name -> TBD - --registry-port -> TBD - --registry-volume -> TBD - --registries-file -> TBD - --enable-registry-cache -> TBD - (add-node) -> `k3d node create NODENAME` - --role -> implemented - --name -> dropped, implemented as arg - --count -> implemented as `--replicas` - --image -> implemented - --arg -> planned - --env -> planned - --volume -> planned - --k3s -> TBD - --k3s-secret -> TBD - --k3s-token -> TBD - delete -> `k3d cluster delete CLUSTERNAME` - --name -> dropped, implemented as arg - --all -> implemented - --prune -> TBD - --keep-registry-volume -> TBD - stop -> `k3d cluster stop CLUSTERNAME` - --name -> dropped, implemented as arg - --all -> implemented - start -> `k3d cluster start CLUSTERNAME` - --name -> dropped, implemented as arg - --all -> implemented - list -> dropped, implemented as `k3d get clusters` - get-kubeconfig -> `k3d kubeconfig get|merge CLUSTERNAME` - --name -> dropped, implemented as arg - --all -> implemented - --overwrite -> implemented - import-images -> `k3d image import [--cluster CLUSTERNAME] [--keep] IMAGES` - --name -> implemented as `--cluster` - --no-remove -> implemented as `--keep-tarball`","title":"Feature Comparison: v1 vs. v3"},{"location":"faq/v1vsv3-comparison/#feature-comparison-v1-vs-v3","text":"","title":"Feature Comparison: v1 vs. 
v3"},{"location":"faq/v1vsv3-comparison/#v1x-feature-implementation-in-v3","text":"- k3d - check-tools -> won't do - shell -> planned: `k3d shell CLUSTER` - --name -> planned: drop (now as arg) - --command -> planned: keep - --shell -> planned: keep (or second arg) - auto, bash, zsh - create -> `k3d cluster create CLUSTERNAME` - --name -> dropped, implemented via arg - --volume -> implemented - --port -> implemented - --port-auto-offset -> TBD - --api-port -> implemented - --wait -> implemented - --image -> implemented - --server-arg -> implemented as `--k3s-server-arg` - --agent-arg -> implemented as `--k3s-agent-arg` - --env -> planned - --label -> planned - --workers -> implemented - --auto-restart -> dropped (docker's `unless-stopped` is set by default) - --enable-registry -> coming in v4.0.0 (2021) as `--registry-create` and `--registry-use` - --registry-name -> TBD - --registry-port -> TBD - --registry-volume -> TBD - --registries-file -> TBD - --enable-registry-cache -> TBD - (add-node) -> `k3d node create NODENAME` - --role -> implemented - --name -> dropped, implemented as arg - --count -> implemented as `--replicas` - --image -> implemented - --arg -> planned - --env -> planned - --volume -> planned - --k3s -> TBD - --k3s-secret -> TBD - --k3s-token -> TBD - delete -> `k3d cluster delete CLUSTERNAME` - --name -> dropped, implemented as arg - --all -> implemented - --prune -> TBD - --keep-registry-volume -> TBD - stop -> `k3d cluster stop CLUSTERNAME` - --name -> dropped, implemented as arg - --all -> implemented - start -> `k3d cluster start CLUSTERNAME` - --name -> dropped, implemented as arg - --all -> implemented - list -> dropped, implemented as `k3d get clusters` - get-kubeconfig -> `k3d kubeconfig get|merge CLUSTERNAME` - --name -> dropped, implemented as arg - --all -> implemented - --overwrite -> implemented - import-images -> `k3d image import [--cluster CLUSTERNAME] [--keep] IMAGES` - --name -> implemented as `--cluster` - --no-remove -> implemented as `--keep-tarball`","title":"v1.x feature -> implementation in v3"},{"location":"internals/defaults/","text":"Defaults \u00b6 multiple server nodes by default, when --server > 1 and no --datastore-x option is set, the first server node (server-0) will be the initializing server node the initializing server node will have the --cluster-init flag appended all other server nodes will refer to the initializing server node via --server https://:6443 API-Ports by default, we expose the API-Port ( 6443 ) by forwarding traffic from the default server loadbalancer (nginx container) to the server node(s) port 6443 of the loadbalancer is then mapped to a specific ( --api-port flag) or a random (default) port on the host system kubeconfig if --kubeconfig-update-default is set, we use the default loading rules to get the default kubeconfig: First: kubeconfig specified via the KUBECONFIG environment variable (error out if multiple are specified) Second: default kubeconfig in home directory (e.g. 
$HOME/.kube/config ) Networking by default, k3d creates a new (docker) network for every cluster","title":"Defaults"},{"location":"internals/defaults/#defaults","text":"multiple server nodes by default, when --server > 1 and no --datastore-x option is set, the first server node (server-0) will be the initializing server node the initializing server node will have the --cluster-init flag appended all other server nodes will refer to the initializing server node via --server https://:6443 API-Ports by default, we expose the API-Port ( 6443 ) by forwarding traffic from the default server loadbalancer (nginx container) to the server node(s) port 6443 of the loadbalancer is then mapped to a specific ( --api-port flag) or a random (default) port on the host system kubeconfig if --kubeconfig-update-default is set, we use the default loading rules to get the default kubeconfig: First: kubeconfig specified via the KUBECONFIG environment variable (error out if multiple are specified) Second: default kubeconfig in home directory (e.g. $HOME/.kube/config ) Networking by default, k3d creates a new (docker) network for every cluster","title":"Defaults"},{"location":"internals/networking/","text":"Networking \u00b6 Related issues: rancher/k3d #220 Introduction \u00b6 By default, k3d creates a new (docker) network for every new cluster. Using the --network STRING flag upon creation to connect to an existing network. Existing networks won\u2019t be managed by k3d together with the cluster lifecycle. Connecting to docker \u201cinternal\u201d/pre-defined networks \u00b6 host network \u00b6 When using the --network flag to connect to the host network (i.e. k3d cluster create --network host ), you won\u2019t be able to create more than one server node . An edge case would be one server node (with agent disabled) and one agent node. bridge network \u00b6 By default, every network that k3d creates is working in bridge mode. But when you try to use --network bridge to connect to docker\u2019s internal bridge network, you may run into issues with grabbing certificates from the API-Server. Single-Node clusters should work though. none \u201cnetwork\u201d \u00b6 Well.. this doesn\u2019t really make sense for k3d anyway \u00af_(\u30c4)_/\u00af","title":"Networking"},{"location":"internals/networking/#networking","text":"Related issues: rancher/k3d #220","title":"Networking"},{"location":"internals/networking/#introduction","text":"By default, k3d creates a new (docker) network for every new cluster. Using the --network STRING flag upon creation to connect to an existing network. Existing networks won\u2019t be managed by k3d together with the cluster lifecycle.","title":"Introduction"},{"location":"internals/networking/#connecting-to-docker-internalpre-defined-networks","text":"","title":"Connecting to docker \"internal\"/pre-defined networks"},{"location":"internals/networking/#host-network","text":"When using the --network flag to connect to the host network (i.e. k3d cluster create --network host ), you won\u2019t be able to create more than one server node . An edge case would be one server node (with agent disabled) and one agent node.","title":"host network"},{"location":"internals/networking/#bridge-network","text":"By default, every network that k3d creates is working in bridge mode. But when you try to use --network bridge to connect to docker\u2019s internal bridge network, you may run into issues with grabbing certificates from the API-Server. 
Single-Node clusters should work though.","title":"bridge network"},{"location":"internals/networking/#none-network","text":"Well.. this doesn\u2019t really make sense for k3d anyway \u00af_(\u30c4)_/\u00af","title":"none \"network\""},{"location":"usage/commands/","text":"Command Tree \u00b6 k3d --verbose # GLOBAL: enable verbose (debug) logging (default: false) --trace # GLOBAL: enable super verbose logging (trace logging) (default: false) --version # show k3d and k3s version -h, --help # GLOBAL: show help text cluster [ CLUSTERNAME ] # default cluster name is 'k3s-default' create -a, --agents # specify how many agent nodes you want to create (integer, default: 0) --api-port # specify the port on which the cluster will be accessible (format '[HOST:]HOSTPORT', default: random) -c, --config # use a config file (format 'PATH') -e, --env # add environment variables to the nodes (quoted string, format: 'KEY[=VALUE][@NODEFILTER[;NODEFILTER...]]', use flag multiple times) --gpus # [from docker CLI] add GPU devices to the node containers (string, e.g. 'all') -i, --image # specify which k3s image should be used for the nodes (string, default: 'docker.io/rancher/k3s:v1.20.0-k3s2', tag changes per build) --k3s-agent-arg # add additional arguments to the k3s agent (quoted string, use flag multiple times) (see https://rancher.com/docs/k3s/latest/en/installation/install-options/agent-config/#k3s-agent-cli-help) --k3s-server-arg # add additional arguments to the k3s server (quoted string, use flag multiple times) (see https://rancher.com/docs/k3s/latest/en/installation/install-options/server-config/#k3s-server-cli-help) --kubeconfig-switch-context # (implies --kubeconfig-update-default) automatically sets the current-context of your default kubeconfig to the new cluster's context (default: true) --kubeconfig-update-default # enable the automated update of the default kubeconfig with the details of the newly created cluster (also sets '--wait=true') (default: true) -l, --label # add (docker) labels to the node containers (format: 'KEY[=VALUE][@NODEFILTER[;NODEFILTER...]]', use flag multiple times) --network # specify an existing (docker) network you want to connect to (string) --no-hostip # disable the automatic injection of the Host IP as 'host.k3d.internal' into the containers and CoreDNS (default: false) --no-image-volume # disable the creation of a volume for storing images (used for the 'k3d image import' command) (default: false) --no-lb # disable the creation of a load balancer in front of the server nodes (default: false) --no-rollback # disable the automatic rollback actions, if anything goes wrong (default: false) -p, --port # add some more port mappings (format: '[HOST:][HOSTPORT:]CONTAINERPORT[/PROTOCOL][@NODEFILTER]', use flag multiple times) --registry-create # create a new (docker) registry dedicated for this cluster (default: false) --registry-use # use an existing local (docker) registry with this cluster (string, use multiple times) -s, --servers # specify how many server nodes you want to create (integer, default: 1) --token # specify a cluster token (string, default: auto-generated) --timeout # specify a timeout, after which the cluster creation will be interrupted and changes rolled back (duration, e.g. 
'10s') -v, --volume # specify additional bind-mounts (format: '[SOURCE:]DEST[@NODEFILTER[;NODEFILTER...]]', use flag multiple times) --wait # enable waiting for all server nodes to be ready before returning (default: true) start CLUSTERNAME # start a (stopped) cluster -a, --all # start all clusters (default: false) --wait # wait for all servers and server-loadbalancer to be up before returning (default: true) --timeout # maximum waiting time for '--wait' before canceling/returning (duration, e.g. '10s') stop CLUSTERNAME # stop a cluster -a, --all # stop all clusters (default: false) delete CLUSTERNAME # delete an existing cluster -a, --all # delete all existing clusters (default: false) list [ CLUSTERNAME [ CLUSTERNAME ... ]] --no-headers # do not print headers (default: false) --token # show column with cluster tokens (default: false) -o, --output # format the output (format: 'json|yaml') completion [ bash | zsh | fish | ( psh | powershell )] # generate completion scripts for common shells config init # write a default k3d config (as a starting point) -f, --force # force overwrite target file (default: false) -o, --output # file to write to (string, default \"k3d-default.yaml\") help [ COMMAND ] # show help text for any command image import [ IMAGE | ARCHIVE [ IMAGE | ARCHIVE ... ]] # Load one or more images from the local runtime environment or tar-archives into k3d clusters -c, --cluster # clusters to load the image into (string, use flag multiple times, default: k3s-default) -k, --keep-tarball # do not delete the image tarball from the shared volume after completion (default: false) kubeconfig get ( CLUSTERNAME [ CLUSTERNAME ... ] | --all ) # get kubeconfig from cluster(s) and write it to stdout -a, --all # get kubeconfigs from all clusters (default: false) merge | write ( CLUSTERNAME [ CLUSTERNAME ... ] | --all ) # get kubeconfig from cluster(s) and merge it/them into a (kubeconfig-)file -a, --all # get kubeconfigs from all clusters (default: false) -s, --kubeconfig-switch-context # switch current-context in kubeconfig to the new context (default: true) -d, --kubeconfig-merge-default # update the default kubeconfig (usually $KUBECONFIG or $HOME/.kube/config) -o, --output # specify the output file where the kubeconfig should be written to (string) --overwrite # [Careful!] forcefully overwrite the output file, ignoring existing contents (default: false) -u, --update # update conflicting fields in existing kubeconfig (default: true) node create NODENAME # Create new nodes (and add them to existing clusters) -c, --cluster # specify the cluster that the node shall connect to (string, default: k3s-default) -i, --image # specify which k3s image should be used for the node(s) (string, default: 'docker.io/rancher/k3s:v1.20.0-k3s2', tag changes per build) --replicas # specify how many replicas you want to create with this spec (integer, default: 1) --role # specify the node role (string, format: 'agent|server', default: agent) --timeout # specify a timeout duration, after which the node creation will be interrupted, if not done yet (duration, e.g. 
'10s') --wait # wait for the node to be up and running before returning (default: true) start NODENAME # start a (stopped) node stop NODENAME # stop a node delete NODENAME # delete an existing node -a, --all # delete all existing nodes (default: false) list NODENAME --no-headers # do not print headers (default: false) registry create REGISTRYNAME -i, --image # specify image used for the registry (string, default: \"docker.io/library/registry:2\") -p, --port # select host port to map to (format: '[HOST:]HOSTPORT', default: 'random') delete REGISTRYNAME -a, --all # delete all existing registries (default: false) list [ NAME [ NAME... ]] --no-headers # disable table headers (default: false) version # show k3d and k3s version","title":"Command Tree"},{"location":"usage/commands/#command-tree","text":"k3d --verbose # GLOBAL: enable verbose (debug) logging (default: false) --trace # GLOBAL: enable super verbose logging (trace logging) (default: false) --version # show k3d and k3s version -h, --help # GLOBAL: show help text cluster [ CLUSTERNAME ] # default cluster name is 'k3s-default' create -a, --agents # specify how many agent nodes you want to create (integer, default: 0) --api-port # specify the port on which the cluster will be accessible (format '[HOST:]HOSTPORT', default: random) -c, --config # use a config file (format 'PATH') -e, --env # add environment variables to the nodes (quoted string, format: 'KEY[=VALUE][@NODEFILTER[;NODEFILTER...]]', use flag multiple times) --gpus # [from docker CLI] add GPU devices to the node containers (string, e.g. 'all') -i, --image # specify which k3s image should be used for the nodes (string, default: 'docker.io/rancher/k3s:v1.20.0-k3s2', tag changes per build) --k3s-agent-arg # add additional arguments to the k3s agent (quoted string, use flag multiple times) (see https://rancher.com/docs/k3s/latest/en/installation/install-options/agent-config/#k3s-agent-cli-help) --k3s-server-arg # add additional arguments to the k3s server (quoted string, use flag multiple times) (see https://rancher.com/docs/k3s/latest/en/installation/install-options/server-config/#k3s-server-cli-help) --kubeconfig-switch-context # (implies --kubeconfig-update-default) automatically sets the current-context of your default kubeconfig to the new cluster's context (default: true) --kubeconfig-update-default # enable the automated update of the default kubeconfig with the details of the newly created cluster (also sets '--wait=true') (default: true) -l, --label # add (docker) labels to the node containers (format: 'KEY[=VALUE][@NODEFILTER[;NODEFILTER...]]', use flag multiple times) --network # specify an existing (docker) network you want to connect to (string) --no-hostip # disable the automatic injection of the Host IP as 'host.k3d.internal' into the containers and CoreDNS (default: false) --no-image-volume # disable the creation of a volume for storing images (used for the 'k3d image import' command) (default: false) --no-lb # disable the creation of a load balancer in front of the server nodes (default: false) --no-rollback # disable the automatic rollback actions, if anything goes wrong (default: false) -p, --port # add some more port mappings (format: '[HOST:][HOSTPORT:]CONTAINERPORT[/PROTOCOL][@NODEFILTER]', use flag multiple times) --registry-create # create a new (docker) registry dedicated for this cluster (default: false) --registry-use # use an existing local (docker) registry with this cluster (string, use multiple times) -s, --servers # specify how many server nodes you 
want to create (integer, default: 1) --token # specify a cluster token (string, default: auto-generated) --timeout # specify a timeout, after which the cluster creation will be interrupted and changes rolled back (duration, e.g. '10s') -v, --volume # specify additional bind-mounts (format: '[SOURCE:]DEST[@NODEFILTER[;NODEFILTER...]]', use flag multiple times) --wait # enable waiting for all server nodes to be ready before returning (default: true) start CLUSTERNAME # start a (stopped) cluster -a, --all # start all clusters (default: false) --wait # wait for all servers and server-loadbalancer to be up before returning (default: true) --timeout # maximum waiting time for '--wait' before canceling/returning (duration, e.g. '10s') stop CLUSTERNAME # stop a cluster -a, --all # stop all clusters (default: false) delete CLUSTERNAME # delete an existing cluster -a, --all # delete all existing clusters (default: false) list [ CLUSTERNAME [ CLUSTERNAME ... ]] --no-headers # do not print headers (default: false) --token # show column with cluster tokens (default: false) -o, --output # format the output (format: 'json|yaml') completion [ bash | zsh | fish | ( psh | powershell )] # generate completion scripts for common shells config init # write a default k3d config (as a starting point) -f, --force # force overwrite target file (default: false) -o, --output # file to write to (string, default \"k3d-default.yaml\") help [ COMMAND ] # show help text for any command image import [ IMAGE | ARCHIVE [ IMAGE | ARCHIVE ... ]] # Load one or more images from the local runtime environment or tar-archives into k3d clusters -c, --cluster # clusters to load the image into (string, use flag multiple times, default: k3s-default) -k, --keep-tarball # do not delete the image tarball from the shared volume after completion (default: false) kubeconfig get ( CLUSTERNAME [ CLUSTERNAME ... ] | --all ) # get kubeconfig from cluster(s) and write it to stdout -a, --all # get kubeconfigs from all clusters (default: false) merge | write ( CLUSTERNAME [ CLUSTERNAME ... ] | --all ) # get kubeconfig from cluster(s) and merge it/them into a (kubeconfig-)file -a, --all # get kubeconfigs from all clusters (default: false) -s, --kubeconfig-switch-context # switch current-context in kubeconfig to the new context (default: true) -d, --kubeconfig-merge-default # update the default kubeconfig (usually $KUBECONFIG or $HOME/.kube/config) -o, --output # specify the output file where the kubeconfig should be written to (string) --overwrite # [Careful!] forcefully overwrite the output file, ignoring existing contents (default: false) -u, --update # update conflicting fields in existing kubeconfig (default: true) node create NODENAME # Create new nodes (and add them to existing clusters) -c, --cluster # specify the cluster that the node shall connect to (string, default: k3s-default) -i, --image # specify which k3s image should be used for the node(s) (string, default: 'docker.io/rancher/k3s:v1.20.0-k3s2', tag changes per build) --replicas # specify how many replicas you want to create with this spec (integer, default: 1) --role # specify the node role (string, format: 'agent|server', default: agent) --timeout # specify a timeout duration, after which the node creation will be interrupted, if not done yet (duration, e.g. 
'10s') --wait # wait for the node to be up and running before returning (default: true) start NODENAME # start a (stopped) node stop NODENAME # stop a node delete NODENAME # delete an existing node -a, --all # delete all existing nodes (default: false) list NODENAME --no-headers # do not print headers (default: false) registry create REGISTRYNAME -i, --image # specify image used for the registry (string, default: \"docker.io/library/registry:2\") -p, --port # select host port to map to (format: '[HOST:]HOSTPORT', default: 'random') delete REGISTRYNAME -a, --all # delete all existing registries (default: false) list [ NAME [ NAME... ]] --no-headers # disable table headers (default: false) version # show k3d and k3s version","title":"Command Tree"},{"location":"usage/kubeconfig/","text":"Handling Kubeconfigs \u00b6 By default, k3d will update your default kubeconfig with your new cluster\u2019s details and set the current-context to it (can be disabled). To get a kubeconfig set up for you to connect to a k3d cluster without this automatism, you can go different ways. What is the default kubeconfig? We determine the path of the used or default kubeconfig in two ways: Using the KUBECONFIG environment variable, if it specifies exactly one file Using the default path (e.g. on Linux it\u2019s $HOME /.kube/config ) Getting the kubeconfig for a newly created cluster \u00b6 Create a new kubeconfig file after cluster creation k3d kubeconfig write mycluster Note: this will create (or update) the file $HOME/.k3d/kubeconfig-mycluster.yaml Tip: Use it: export KUBECONFIG = $( k3d kubeconfig write mycluster ) Note 2 : alternatively you can use k3d kubeconfig get mycluster > some-file.yaml Update your default kubeconfig upon cluster creation (DEFAULT) k3d cluster create mycluster --kubeconfig-update-default Note: this won\u2019t switch the current-context (append --kubeconfig-switch-context to do so) Update your default kubeconfig after cluster creation k3d kubeconfig merge mycluster --kubeconfig-merge-default Note: this won\u2019t switch the current-context (append --kubeconfig-switch-context to do so) Update a different kubeconfig after cluster creation k3d kubeconfig merge mycluster --output some/other/file.yaml Note: this won\u2019t switch the current-context The file will be created if it doesn\u2019t exist Switching the current context None of the above options switch the current-context by default. This is intended to be least intrusive, since the current-context has a global effect. You can switch the current-context directly with the kubeconfig merge command by adding the --kubeconfig-switch-context flag. Removing cluster details from the kubeconfig \u00b6 k3d cluster delete mycluster will always remove the details for mycluster from the default kubeconfig. It will also delete the respective kubeconfig file in $HOME/.k3d/ if it exists. Handling multiple clusters \u00b6 k3d kubeconfig merge let\u2019s you specify one or more clusters via arguments or all via --all . All kubeconfigs will then be merged into a single file if --kubeconfig-merge-default or --output is specified. If none of those two flags was specified, a new file will be created per cluster and the merged path (e.g. $HOME/.k3d/kubeconfig-cluster1.yaml:$HOME/.k3d/cluster2.yaml ) will be returned. 
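To make the write/merge options described above concrete, here is a minimal sketch of the dedicated-file workflow, assuming a cluster named mycluster as in the examples on this page (the context name in the comment follows k3d's usual k3d-<cluster> naming convention and is not spelled out on this page):

    k3d cluster create mycluster
    export KUBECONFIG="$(k3d kubeconfig write mycluster)"   # writes $HOME/.k3d/kubeconfig-mycluster.yaml
    kubectl config current-context                          # typically prints k3d-mycluster
    kubectl get nodes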
Note, that with multiple cluster specified, the --kubeconfig-switch-context flag will change the current context to the cluster which was last in the list.","title":"Handling Kubeconfigs"},{"location":"usage/kubeconfig/#handling-kubeconfigs","text":"By default, k3d will update your default kubeconfig with your new cluster\u2019s details and set the current-context to it (can be disabled). To get a kubeconfig set up for you to connect to a k3d cluster without this automatism, you can go different ways. What is the default kubeconfig? We determine the path of the used or default kubeconfig in two ways: Using the KUBECONFIG environment variable, if it specifies exactly one file Using the default path (e.g. on Linux it\u2019s $HOME /.kube/config )","title":"Handling Kubeconfigs"},{"location":"usage/kubeconfig/#getting-the-kubeconfig-for-a-newly-created-cluster","text":"Create a new kubeconfig file after cluster creation k3d kubeconfig write mycluster Note: this will create (or update) the file $HOME/.k3d/kubeconfig-mycluster.yaml Tip: Use it: export KUBECONFIG = $( k3d kubeconfig write mycluster ) Note 2 : alternatively you can use k3d kubeconfig get mycluster > some-file.yaml Update your default kubeconfig upon cluster creation (DEFAULT) k3d cluster create mycluster --kubeconfig-update-default Note: this won\u2019t switch the current-context (append --kubeconfig-switch-context to do so) Update your default kubeconfig after cluster creation k3d kubeconfig merge mycluster --kubeconfig-merge-default Note: this won\u2019t switch the current-context (append --kubeconfig-switch-context to do so) Update a different kubeconfig after cluster creation k3d kubeconfig merge mycluster --output some/other/file.yaml Note: this won\u2019t switch the current-context The file will be created if it doesn\u2019t exist Switching the current context None of the above options switch the current-context by default. This is intended to be least intrusive, since the current-context has a global effect. You can switch the current-context directly with the kubeconfig merge command by adding the --kubeconfig-switch-context flag.","title":"Getting the kubeconfig for a newly created cluster"},{"location":"usage/kubeconfig/#removing-cluster-details-from-the-kubeconfig","text":"k3d cluster delete mycluster will always remove the details for mycluster from the default kubeconfig. It will also delete the respective kubeconfig file in $HOME/.k3d/ if it exists.","title":"Removing cluster details from the kubeconfig"},{"location":"usage/kubeconfig/#handling-multiple-clusters","text":"k3d kubeconfig merge let\u2019s you specify one or more clusters via arguments or all via --all . All kubeconfigs will then be merged into a single file if --kubeconfig-merge-default or --output is specified. If none of those two flags was specified, a new file will be created per cluster and the merged path (e.g. $HOME/.k3d/kubeconfig-cluster1.yaml:$HOME/.k3d/cluster2.yaml ) will be returned. Note, that with multiple cluster specified, the --kubeconfig-switch-context flag will change the current context to the cluster which was last in the list.","title":"Handling multiple clusters"},{"location":"usage/multiserver/","text":"Creating multi-server clusters \u00b6 Important note For the best results (and less unexpected issues), choose 1, 3, 5, \u2026 server nodes. Embedded dqlite \u00b6 Create a cluster with 3 server nodes using k3s\u2019 embedded dqlite database. 
The first server to be created will use the --cluster-init flag and k3d will wait for it to be up and running before creating (and connecting) the other server nodes. k3d cluster create multiserver --servers 3 Adding server nodes to a running cluster \u00b6 In theory (and also in practice in most cases), this is as easy as executing the following command: k3d node create newserver --cluster multiserver --role server There\u2019s a trap! If your cluster was initially created with only a single server node, then this will fail. That\u2019s because the initial server node was not started with the --cluster-init flag and thus is not using the dqlite backend.","title":"Creating multi-server clusters"},{"location":"usage/multiserver/#creating-multi-server-clusters","text":"Important note For the best results (and less unexpected issues), choose 1, 3, 5, \u2026 server nodes.","title":"Creating multi-server clusters"},{"location":"usage/multiserver/#embedded-dqlite","text":"Create a cluster with 3 server nodes using k3s\u2019 embedded dqlite database. The first server to be created will use the --cluster-init flag and k3d will wait for it to be up and running before creating (and connecting) the other server nodes. k3d cluster create multiserver --servers 3","title":"Embedded dqlite"},{"location":"usage/multiserver/#adding-server-nodes-to-a-running-cluster","text":"In theory (and also in practice in most cases), this is as easy as executing the following command: k3d node create newserver --cluster multiserver --role server There\u2019s a trap! If your cluster was initially created with only a single server node, then this will fail. That\u2019s because the initial server node was not started with the --cluster-init flag and thus is not using the dqlite backend.","title":"Adding server nodes to a running cluster"},{"location":"usage/guides/calico/","text":"Use Calico instead of Flannel \u00b6 If you want to use NetworkPolicy you can use Calico in k3s instead of Flannel. 1. Download and modify the Calico descriptor \u00b6 You can following the documentation And then you have to change the ConfigMap calico-config . On the cni_network_config add the entry for allowing IP forwarding \"container_settings\" : { \"allow_ip_forwarding\" : true } Or you can directly use this calico.yaml manifest 2. Create the cluster without flannel and with calico \u00b6 On the k3s cluster creation : - add the flag --flannel-backend=none . For this, on k3d you need to forward this flag to k3s with the option --k3s-server-arg . - mount ( --volume ) the calico descriptor in the auto deploy manifest directory of k3s /var/lib/rancher/k3s/server/manifests/ So the command of the cluster creation is (when you are at root of the k3d repository) k3d cluster create \" ${ clustername } \" --k3s-server-arg '--flannel-backend=none' --volume \" $( pwd ) /docs/usage/guides/calico.yaml:/var/lib/rancher/k3s/server/manifests/calico.yaml\" In this example : - change \"${clustername}\" with the name of the cluster (or set a variable). - $(pwd)/docs/usage/guides/calico.yaml is the absolute path of the calico manifest, you can adapt it. You can add other options, see . The cluster will start without flannel and with Calico as CNI Plugin. 
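As a concrete instance of the command above (a sketch only), assuming a hypothetical cluster name calico-cluster and a calico.yaml manifest placed in the current working directory instead of the k3d repository checkout:

    k3d cluster create calico-cluster \
      --k3s-server-arg '--flannel-backend=none' \
      --volume "$(pwd)/calico.yaml:/var/lib/rancher/k3s/server/manifests/calico.yaml"

The pod watch below then shows the Calico components coming up.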
For watching for the pod(s) deployment watch \"kubectl get pods -n kube-system\" You will have something like this at beginning (with the command line kubectl get pods -n kube-system ) NAME READY STATUS RESTARTS AGE helm-install-traefik-pn84f 0/1 Pending 0 3s calico-node-97rx8 0/1 Init:0/3 0 3s metrics-server-7566d596c8-hwnqq 0/1 Pending 0 2s calico-kube-controllers-58b656d69f-2z7cn 0/1 Pending 0 2s local-path-provisioner-6d59f47c7-rmswg 0/1 Pending 0 2s coredns-8655855d6-cxtnr 0/1 Pending 0 2s And when it finish to start NAME READY STATUS RESTARTS AGE metrics-server-7566d596c8-hwnqq 1/1 Running 0 56s calico-node-97rx8 1/1 Running 0 57s helm-install-traefik-pn84f 0/1 Completed 1 57s svclb-traefik-lmjr5 2/2 Running 0 28s calico-kube-controllers-58b656d69f-2z7cn 1/1 Running 0 56s local-path-provisioner-6d59f47c7-rmswg 1/1 Running 0 56s traefik-758cd5fc85-x8p57 1/1 Running 0 28s coredns-8655855d6-cxtnr 1/1 Running 0 56s Note : - you can use the auto deploy manifest or a kubectl apply depending on your needs - Calico is not as quick as Flannel (but it provides more features) References \u00b6 https://rancher.com/docs/k3s/latest/en/installation/network-options/ https://docs.projectcalico.org/getting-started/kubernetes/k3s/","title":"Use Calico instead of Flannel"},{"location":"usage/guides/calico/#use-calico-instead-of-flannel","text":"If you want to use NetworkPolicy you can use Calico in k3s instead of Flannel.","title":"Use Calico instead of Flannel"},{"location":"usage/guides/calico/#1-download-and-modify-the-calico-descriptor","text":"You can following the documentation And then you have to change the ConfigMap calico-config . On the cni_network_config add the entry for allowing IP forwarding \"container_settings\" : { \"allow_ip_forwarding\" : true } Or you can directly use this calico.yaml manifest","title":"1. Download and modify the Calico descriptor"},{"location":"usage/guides/calico/#2-create-the-cluster-without-flannel-and-with-calico","text":"On the k3s cluster creation : - add the flag --flannel-backend=none . For this, on k3d you need to forward this flag to k3s with the option --k3s-server-arg . - mount ( --volume ) the calico descriptor in the auto deploy manifest directory of k3s /var/lib/rancher/k3s/server/manifests/ So the command of the cluster creation is (when you are at root of the k3d repository) k3d cluster create \" ${ clustername } \" --k3s-server-arg '--flannel-backend=none' --volume \" $( pwd ) /docs/usage/guides/calico.yaml:/var/lib/rancher/k3s/server/manifests/calico.yaml\" In this example : - change \"${clustername}\" with the name of the cluster (or set a variable). - $(pwd)/docs/usage/guides/calico.yaml is the absolute path of the calico manifest, you can adapt it. You can add other options, see . The cluster will start without flannel and with Calico as CNI Plugin. 
For watching for the pod(s) deployment watch \"kubectl get pods -n kube-system\" You will have something like this at beginning (with the command line kubectl get pods -n kube-system ) NAME READY STATUS RESTARTS AGE helm-install-traefik-pn84f 0/1 Pending 0 3s calico-node-97rx8 0/1 Init:0/3 0 3s metrics-server-7566d596c8-hwnqq 0/1 Pending 0 2s calico-kube-controllers-58b656d69f-2z7cn 0/1 Pending 0 2s local-path-provisioner-6d59f47c7-rmswg 0/1 Pending 0 2s coredns-8655855d6-cxtnr 0/1 Pending 0 2s And when it finish to start NAME READY STATUS RESTARTS AGE metrics-server-7566d596c8-hwnqq 1/1 Running 0 56s calico-node-97rx8 1/1 Running 0 57s helm-install-traefik-pn84f 0/1 Completed 1 57s svclb-traefik-lmjr5 2/2 Running 0 28s calico-kube-controllers-58b656d69f-2z7cn 1/1 Running 0 56s local-path-provisioner-6d59f47c7-rmswg 1/1 Running 0 56s traefik-758cd5fc85-x8p57 1/1 Running 0 28s coredns-8655855d6-cxtnr 1/1 Running 0 56s Note : - you can use the auto deploy manifest or a kubectl apply depending on your needs - Calico is not as quick as Flannel (but it provides more features)","title":"2. Create the cluster without flannel and with calico"},{"location":"usage/guides/calico/#references","text":"https://rancher.com/docs/k3s/latest/en/installation/network-options/ https://docs.projectcalico.org/getting-started/kubernetes/k3s/","title":"References"},{"location":"usage/guides/cuda/","text":"Running CUDA workloads \u00b6 If you want to run CUDA workloads on the K3S container you need to customize the container. CUDA workloads require the NVIDIA Container Runtime, so containerd needs to be configured to use this runtime. The K3S container itself also needs to run with this runtime. If you are using Docker you can install the NVIDIA Container Toolkit . Building a customized K3S image \u00b6 To get the NVIDIA container runtime in the K3S image you need to build your own K3S image. The native K3S image is based on Alpine but the NVIDIA container runtime is not supported on Alpine yet. To get around this we need to build the image with a supported base image. 
Adapt the Dockerfile \u00b6 FROM ubuntu:18.04 as base RUN apt-get update -y && apt-get install -y ca-certificates ADD k3s/build/out/data.tar.gz /image RUN mkdir -p /image/etc/ssl/certs /image/run /image/var/run /image/tmp /image/lib/modules /image/lib/firmware && \\ cp /etc/ssl/certs/ca-certificates.crt /image/etc/ssl/certs/ca-certificates.crt RUN cd image/bin && \\ rm -f k3s && \\ ln -s k3s-server k3s FROM ubuntu:18.04 RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections RUN apt-get update -y && apt-get -y install gnupg2 curl # Install the NVIDIA CUDA drivers and Container Runtime RUN apt-key adv --fetch-keys http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub RUN sh -c 'echo \"deb http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 /\" > /etc/apt/sources.list.d/cuda.list' RUN curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | apt-key add - RUN curl -s -L https://nvidia.github.io/nvidia-container-runtime/ubuntu18.04/nvidia-container-runtime.list | tee /etc/apt/sources.list.d/nvidia-container-runtime.list RUN apt-get update -y RUN apt-get -y install cuda-drivers nvidia-container-runtime COPY --from = base /image / RUN mkdir -p /etc && \\ echo 'hosts: files dns' > /etc/nsswitch.conf RUN chmod 1777 /tmp # Provide custom containerd configuration to configure the nvidia-container-runtime RUN mkdir -p /var/lib/rancher/k3s/agent/etc/containerd/ COPY config.toml.tmpl /var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl # Deploy the nvidia driver plugin on startup RUN mkdir -p /var/lib/rancher/k3s/server/manifests COPY gpu.yaml /var/lib/rancher/k3s/server/manifests/gpu.yaml VOLUME /var/lib/kubelet VOLUME /var/lib/rancher/k3s VOLUME /var/lib/cni VOLUME /var/log ENV PATH = \" $PATH :/bin/aux\" ENTRYPOINT [ \"/bin/k3s\" ] CMD [ \"agent\" ] This Dockerfile is based on the K3S Dockerfile . The following changes are applied: Change the base images to Ubuntu 18.04 so the NVIDIA Container Runtime can be installed Add a custom containerd config.toml template to add the NVIDIA Container Runtime. This replaces the default runc runtime Add a manifest for the NVIDIA driver plugin for Kubernetes Configure containerd \u00b6 We need to configure containerd to use the NVIDIA Container Runtime. We need to customize the config.toml that is used at startup. K3S provides a way to do this using a config.toml.tmpl file. More information can be found on the K3S site . [ plugins . opt ] path = \"{{ .NodeConfig.Containerd.Opt }}\" [ plugins . cri ] stream_server_address = \"127.0.0.1\" stream_server_port = \"10010\" {{ - if . IsRunningInUserNS }} disable_cgroup = true disable_apparmor = true restrict_oom_score_adj = true {{ end }} {{ - if . NodeConfig . AgentConfig . PauseImage }} sandbox_image = \"{{ .NodeConfig.AgentConfig.PauseImage }}\" {{ end }} {{ - if not . NodeConfig . NoFlannel }} [ plugins . cri . cni ] bin_dir = \"{{ .NodeConfig.AgentConfig.CNIBinDir }}\" conf_dir = \"{{ .NodeConfig.AgentConfig.CNIConfDir }}\" {{ end }} [ plugins . cri . containerd . runtimes . runc ] # ---- changed from ' io . containerd . runc . v2 ' for GPU support runtime_type = \"io.containerd.runtime.v1.linux\" # ---- added for GPU support [ plugins . linux ] runtime = \"nvidia-container-runtime\" {{ if . PrivateRegistryConfig }} {{ if . PrivateRegistryConfig . Mirrors }} [ plugins . cri . registry . mirrors ]{{ end }} {{ range $ k , $ v := . PrivateRegistryConfig . Mirrors }} [ plugins . cri . registry . mirrors . 
\"{{$k}}\" ] endpoint = [{{ range $ i , $ j := $ v . Endpoints }}{{ if $ i }}, {{ end }}{{ printf \"%q\" .}}{{ end }}] {{ end }} {{ range $ k , $ v := . PrivateRegistryConfig . Configs }} {{ if $ v . Auth }} [ plugins . cri . registry . configs . \"{{$k}}\" . auth ] {{ if $ v . Auth . Username }} username = \"{{ $v.Auth.Username }}\" {{ end }} {{ if $ v . Auth . Password }} password = \"{{ $v.Auth.Password }}\" {{ end }} {{ if $ v . Auth . Auth }} auth = \"{{ $v.Auth.Auth }}\" {{ end }} {{ if $ v . Auth . IdentityToken }} identitytoken = \"{{ $v.Auth.IdentityToken }}\" {{ end }} {{ end }} {{ if $ v . TLS }} [ plugins . cri . registry . configs . \"{{$k}}\" . tls ] {{ if $ v . TLS . CAFile }} ca_file = \"{{ $v.TLS.CAFile }}\" {{ end }} {{ if $ v . TLS . CertFile }} cert_file = \"{{ $v.TLS.CertFile }}\" {{ end }} {{ if $ v . TLS . KeyFile }} key_file = \"{{ $v.TLS.KeyFile }}\" {{ end }} {{ end }} {{ end }} {{ end }} The NVIDIA device plugin \u00b6 To enable NVIDIA GPU support on Kubernetes you also need to install the NVIDIA device plugin . The device plugin is a deamonset and allows you to automatically: Expose the number of GPUs on each nodes of your cluster Keep track of the health of your GPUs Run GPU enabled containers in your Kubernetes cluster. apiVersion : apps/v1 kind : DaemonSet metadata : name : nvidia-device-plugin-daemonset namespace : kube-system spec : selector : matchLabels : name : nvidia-device-plugin-ds template : metadata : # Mark this pod as a critical add-on; when enabled, the critical add-on scheduler # reserves resources for critical add-on pods so that they can be rescheduled after # a failure. This annotation works in tandem with the toleration below. annotations : scheduler.alpha.kubernetes.io/critical-pod : \"\" labels : name : nvidia-device-plugin-ds spec : tolerations : # Allow this pod to be rescheduled while the node is in \"critical add-ons only\" mode. # This, along with the annotation above marks this pod as a critical add-on. - key : CriticalAddonsOnly operator : Exists containers : - env : - name : DP_DISABLE_HEALTHCHECKS value : xids image : nvidia/k8s-device-plugin:1.11 name : nvidia-device-plugin-ctr securityContext : allowPrivilegeEscalation : true capabilities : drop : [ \"ALL\" ] volumeMounts : - name : device-plugin mountPath : /var/lib/kubelet/device-plugins volumes : - name : device-plugin hostPath : path : /var/lib/kubelet/device-plugins Build the K3S image \u00b6 To build the custom image we need to build K3S because we need the generated output. Put the following files in a directory: Dockerfile config.toml.tmpl gpu.yaml build.sh cuda-vector-add.yaml The build.sh files takes the K3S git tag as argument, it defaults to v1.18.10+k3s1 . The script performs the following steps: pulls K3S builds K3S build the custom K3S Docker image The resulting image is tagged as k3s-gpu:. The version tag is the git tag but the \u2018+\u2019 sign is replaced with a \u2018-\u2018. build.sh : #!/bin/bash set -e cd $( dirname $0 ) K3S_TAG = \" ${ 1 :- v1 .18.10+k3s1 } \" IMAGE_TAG = \" ${ K3S_TAG /+/- } \" if [ -d k3s ] ; then rm -rf k3s fi git clone --depth 1 https://github.com/rancher/k3s.git -b $K3S_TAG cd k3s make cd .. docker build -t k3s-gpu: $IMAGE_TAG . 
Run and test the custom image with Docker \u00b6 You can run a container based on the new image with Docker: docker run --name k3s-gpu -d --privileged --gpus all k3s-gpu:v1.18.10-k3s1 Deploy a test pod : docker cp cuda-vector-add.yaml k3s-gpu:/cuda-vector-add.yaml docker exec k3s-gpu kubectl apply -f /cuda-vector-add.yaml docker exec k3s-gpu kubectl logs cuda-vector-add Run and test the custom image with k3d \u00b6 Tou can use the image with k3d: k3d cluster create --no-lb --image k3s-gpu:v1.18.10-k3s1 --gpus all Deploy a test pod : kubectl apply -f cuda-vector-add.yaml kubectl logs cuda-vector-add Known issues \u00b6 This approach does not work on WSL2 yet. The NVIDIA driver plugin and container runtime rely on the NVIDIA Management Library (NVML) which is not yet supported. See the CUDA on WSL User Guide . Acknowledgements \u00b6 Most of the information in this article was obtained from various sources: Add NVIDIA GPU support to k3s with containerd microk8s K3S","title":"Running CUDA workloads"},{"location":"usage/guides/cuda/#running-cuda-workloads","text":"If you want to run CUDA workloads on the K3S container you need to customize the container. CUDA workloads require the NVIDIA Container Runtime, so containerd needs to be configured to use this runtime. The K3S container itself also needs to run with this runtime. If you are using Docker you can install the NVIDIA Container Toolkit .","title":"Running CUDA workloads"},{"location":"usage/guides/cuda/#building-a-customized-k3s-image","text":"To get the NVIDIA container runtime in the K3S image you need to build your own K3S image. The native K3S image is based on Alpine but the NVIDIA container runtime is not supported on Alpine yet. To get around this we need to build the image with a supported base image.","title":"Building a customized K3S image"},{"location":"usage/guides/cuda/#adapt-the-dockerfile","text":"FROM ubuntu:18.04 as base RUN apt-get update -y && apt-get install -y ca-certificates ADD k3s/build/out/data.tar.gz /image RUN mkdir -p /image/etc/ssl/certs /image/run /image/var/run /image/tmp /image/lib/modules /image/lib/firmware && \\ cp /etc/ssl/certs/ca-certificates.crt /image/etc/ssl/certs/ca-certificates.crt RUN cd image/bin && \\ rm -f k3s && \\ ln -s k3s-server k3s FROM ubuntu:18.04 RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections RUN apt-get update -y && apt-get -y install gnupg2 curl # Install the NVIDIA CUDA drivers and Container Runtime RUN apt-key adv --fetch-keys http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub RUN sh -c 'echo \"deb http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 /\" > /etc/apt/sources.list.d/cuda.list' RUN curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | apt-key add - RUN curl -s -L https://nvidia.github.io/nvidia-container-runtime/ubuntu18.04/nvidia-container-runtime.list | tee /etc/apt/sources.list.d/nvidia-container-runtime.list RUN apt-get update -y RUN apt-get -y install cuda-drivers nvidia-container-runtime COPY --from = base /image / RUN mkdir -p /etc && \\ echo 'hosts: files dns' > /etc/nsswitch.conf RUN chmod 1777 /tmp # Provide custom containerd configuration to configure the nvidia-container-runtime RUN mkdir -p /var/lib/rancher/k3s/agent/etc/containerd/ COPY config.toml.tmpl /var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl # Deploy the nvidia driver plugin on startup RUN mkdir -p /var/lib/rancher/k3s/server/manifests COPY gpu.yaml 
/var/lib/rancher/k3s/server/manifests/gpu.yaml VOLUME /var/lib/kubelet VOLUME /var/lib/rancher/k3s VOLUME /var/lib/cni VOLUME /var/log ENV PATH = \" $PATH :/bin/aux\" ENTRYPOINT [ \"/bin/k3s\" ] CMD [ \"agent\" ] This Dockerfile is based on the K3S Dockerfile . The following changes are applied: Change the base images to Ubuntu 18.04 so the NVIDIA Container Runtime can be installed Add a custom containerd config.toml template to add the NVIDIA Container Runtime. This replaces the default runc runtime Add a manifest for the NVIDIA driver plugin for Kubernetes","title":"Adapt the Dockerfile"},{"location":"usage/guides/cuda/#configure-containerd","text":"We need to configure containerd to use the NVIDIA Container Runtime. We need to customize the config.toml that is used at startup. K3S provides a way to do this using a config.toml.tmpl file. More information can be found on the K3S site . [ plugins . opt ] path = \"{{ .NodeConfig.Containerd.Opt }}\" [ plugins . cri ] stream_server_address = \"127.0.0.1\" stream_server_port = \"10010\" {{ - if . IsRunningInUserNS }} disable_cgroup = true disable_apparmor = true restrict_oom_score_adj = true {{ end }} {{ - if . NodeConfig . AgentConfig . PauseImage }} sandbox_image = \"{{ .NodeConfig.AgentConfig.PauseImage }}\" {{ end }} {{ - if not . NodeConfig . NoFlannel }} [ plugins . cri . cni ] bin_dir = \"{{ .NodeConfig.AgentConfig.CNIBinDir }}\" conf_dir = \"{{ .NodeConfig.AgentConfig.CNIConfDir }}\" {{ end }} [ plugins . cri . containerd . runtimes . runc ] # ---- changed from ' io . containerd . runc . v2 ' for GPU support runtime_type = \"io.containerd.runtime.v1.linux\" # ---- added for GPU support [ plugins . linux ] runtime = \"nvidia-container-runtime\" {{ if . PrivateRegistryConfig }} {{ if . PrivateRegistryConfig . Mirrors }} [ plugins . cri . registry . mirrors ]{{ end }} {{ range $ k , $ v := . PrivateRegistryConfig . Mirrors }} [ plugins . cri . registry . mirrors . \"{{$k}}\" ] endpoint = [{{ range $ i , $ j := $ v . Endpoints }}{{ if $ i }}, {{ end }}{{ printf \"%q\" .}}{{ end }}] {{ end }} {{ range $ k , $ v := . PrivateRegistryConfig . Configs }} {{ if $ v . Auth }} [ plugins . cri . registry . configs . \"{{$k}}\" . auth ] {{ if $ v . Auth . Username }} username = \"{{ $v.Auth.Username }}\" {{ end }} {{ if $ v . Auth . Password }} password = \"{{ $v.Auth.Password }}\" {{ end }} {{ if $ v . Auth . Auth }} auth = \"{{ $v.Auth.Auth }}\" {{ end }} {{ if $ v . Auth . IdentityToken }} identitytoken = \"{{ $v.Auth.IdentityToken }}\" {{ end }} {{ end }} {{ if $ v . TLS }} [ plugins . cri . registry . configs . \"{{$k}}\" . tls ] {{ if $ v . TLS . CAFile }} ca_file = \"{{ $v.TLS.CAFile }}\" {{ end }} {{ if $ v . TLS . CertFile }} cert_file = \"{{ $v.TLS.CertFile }}\" {{ end }} {{ if $ v . TLS . KeyFile }} key_file = \"{{ $v.TLS.KeyFile }}\" {{ end }} {{ end }} {{ end }} {{ end }}","title":"Configure containerd"},{"location":"usage/guides/cuda/#the-nvidia-device-plugin","text":"To enable NVIDIA GPU support on Kubernetes you also need to install the NVIDIA device plugin . The device plugin is a deamonset and allows you to automatically: Expose the number of GPUs on each nodes of your cluster Keep track of the health of your GPUs Run GPU enabled containers in your Kubernetes cluster. 
apiVersion : apps/v1 kind : DaemonSet metadata : name : nvidia-device-plugin-daemonset namespace : kube-system spec : selector : matchLabels : name : nvidia-device-plugin-ds template : metadata : # Mark this pod as a critical add-on; when enabled, the critical add-on scheduler # reserves resources for critical add-on pods so that they can be rescheduled after # a failure. This annotation works in tandem with the toleration below. annotations : scheduler.alpha.kubernetes.io/critical-pod : \"\" labels : name : nvidia-device-plugin-ds spec : tolerations : # Allow this pod to be rescheduled while the node is in \"critical add-ons only\" mode. # This, along with the annotation above marks this pod as a critical add-on. - key : CriticalAddonsOnly operator : Exists containers : - env : - name : DP_DISABLE_HEALTHCHECKS value : xids image : nvidia/k8s-device-plugin:1.11 name : nvidia-device-plugin-ctr securityContext : allowPrivilegeEscalation : true capabilities : drop : [ \"ALL\" ] volumeMounts : - name : device-plugin mountPath : /var/lib/kubelet/device-plugins volumes : - name : device-plugin hostPath : path : /var/lib/kubelet/device-plugins","title":"The NVIDIA device plugin"},{"location":"usage/guides/cuda/#build-the-k3s-image","text":"To build the custom image we need to build K3S because we need the generated output. Put the following files in a directory: Dockerfile config.toml.tmpl gpu.yaml build.sh cuda-vector-add.yaml The build.sh files takes the K3S git tag as argument, it defaults to v1.18.10+k3s1 . The script performs the following steps: pulls K3S builds K3S build the custom K3S Docker image The resulting image is tagged as k3s-gpu:. The version tag is the git tag but the \u2018+\u2019 sign is replaced with a \u2018-\u2018. build.sh : #!/bin/bash set -e cd $( dirname $0 ) K3S_TAG = \" ${ 1 :- v1 .18.10+k3s1 } \" IMAGE_TAG = \" ${ K3S_TAG /+/- } \" if [ -d k3s ] ; then rm -rf k3s fi git clone --depth 1 https://github.com/rancher/k3s.git -b $K3S_TAG cd k3s make cd .. docker build -t k3s-gpu: $IMAGE_TAG .","title":"Build the K3S image"},{"location":"usage/guides/cuda/#run-and-test-the-custom-image-with-docker","text":"You can run a container based on the new image with Docker: docker run --name k3s-gpu -d --privileged --gpus all k3s-gpu:v1.18.10-k3s1 Deploy a test pod : docker cp cuda-vector-add.yaml k3s-gpu:/cuda-vector-add.yaml docker exec k3s-gpu kubectl apply -f /cuda-vector-add.yaml docker exec k3s-gpu kubectl logs cuda-vector-add","title":"Run and test the custom image with Docker"},{"location":"usage/guides/cuda/#run-and-test-the-custom-image-with-k3d","text":"Tou can use the image with k3d: k3d cluster create --no-lb --image k3s-gpu:v1.18.10-k3s1 --gpus all Deploy a test pod : kubectl apply -f cuda-vector-add.yaml kubectl logs cuda-vector-add","title":"Run and test the custom image with k3d"},{"location":"usage/guides/cuda/#known-issues","text":"This approach does not work on WSL2 yet. The NVIDIA driver plugin and container runtime rely on the NVIDIA Management Library (NVML) which is not yet supported. See the CUDA on WSL User Guide .","title":"Known issues"},{"location":"usage/guides/cuda/#acknowledgements","text":"Most of the information in this article was obtained from various sources: Add NVIDIA GPU support to k3s with containerd microk8s K3S","title":"Acknowledgements"},{"location":"usage/guides/exposing_services/","text":"Exposing Services \u00b6 1. 
via Ingress (recommended) \u00b6 In this example, we will deploy a simple nginx webserver deployment and make it accessible via ingress. Therefore, we have to create the cluster in a way, that the internal port 80 (where the traefik ingress controller is listening on) is exposed on the host system. Create a cluster, mapping the ingress port 80 to localhost:8081 k3d cluster create --api-port 6550 -p \"8081:80@loadbalancer\" --agents 2 Good to know --api-port 6550 is not required for the example to work. It\u2019s used to have k3s \u2018s API-Server listening on port 6550 with that port mapped to the host system. the port-mapping construct 8081:80@loadbalancer means map port 8081 from the host to port 80 on the container which matches the nodefilter loadbalancer the loadbalancer nodefilter matches only the serverlb that\u2019s deployed in front of a cluster\u2019s server nodes all ports exposed on the serverlb will be proxied to the same ports on all server nodes in the cluster Get the kubeconfig file (redundant, as k3d cluster create already merges it into your default kubeconfig file) export KUBECONFIG = \" $( k3d kubeconfig write k3s-default ) \" Create a nginx deployment kubectl create deployment nginx --image = nginx Create a ClusterIP service for it kubectl create service clusterip nginx --tcp = 80 :80 Create an ingress object for it with kubectl apply -f Note : k3s deploys traefik as the default ingress controller # apiVersion: networking.k8s.io/v1beta1 # for k3s < v1.19 apiVersion : networking.k8s.io/v1 kind : Ingress metadata : name : nginx annotations : ingress.kubernetes.io/ssl-redirect : \"false\" spec : rules : - http : paths : - path : / pathType : Prefix backend : service : name : nginx port : number : 80 Curl it via localhost curl localhost:8081/ 2. via NodePort \u00b6 Create a cluster, mapping the port 30080 from agent-0 to localhost:8082 k3d cluster create mycluster -p \"8082:30080@agent[0]\" --agents 2 Note : Kubernetes\u2019 default NodePort range is 30000-32767 Note : You may as well expose the whole NodePort range from the very beginning, e.g. via k3d cluster create mycluster --agents 3 -p \"30000-32767:30000-32767@server[0]\" (See this video from @portainer ) Warning : Docker creates iptable entries and a new proxy process per port-mapping, so this may take a very long time or even freeze your system! \u2026 (Steps 2 and 3 like above) \u2026 Create a NodePort service for it with kubectl apply -f apiVersion : v1 kind : Service metadata : labels : app : nginx name : nginx spec : ports : - name : 80-80 nodePort : 30080 port : 80 protocol : TCP targetPort : 80 selector : app : nginx type : NodePort Curl it via localhost curl localhost:8082/","title":"Exposing Services"},{"location":"usage/guides/exposing_services/#exposing-services","text":"","title":"Exposing Services"},{"location":"usage/guides/exposing_services/#1-via-ingress-recommended","text":"In this example, we will deploy a simple nginx webserver deployment and make it accessible via ingress. Therefore, we have to create the cluster in a way, that the internal port 80 (where the traefik ingress controller is listening on) is exposed on the host system. Create a cluster, mapping the ingress port 80 to localhost:8081 k3d cluster create --api-port 6550 -p \"8081:80@loadbalancer\" --agents 2 Good to know --api-port 6550 is not required for the example to work. It\u2019s used to have k3s \u2018s API-Server listening on port 6550 with that port mapped to the host system. 
the port-mapping construct 8081:80@loadbalancer means map port 8081 from the host to port 80 on the container which matches the nodefilter loadbalancer the loadbalancer nodefilter matches only the serverlb that\u2019s deployed in front of a cluster\u2019s server nodes all ports exposed on the serverlb will be proxied to the same ports on all server nodes in the cluster Get the kubeconfig file (redundant, as k3d cluster create already merges it into your default kubeconfig file) export KUBECONFIG = \" $( k3d kubeconfig write k3s-default ) \" Create a nginx deployment kubectl create deployment nginx --image = nginx Create a ClusterIP service for it kubectl create service clusterip nginx --tcp = 80 :80 Create an ingress object for it with kubectl apply -f Note : k3s deploys traefik as the default ingress controller # apiVersion: networking.k8s.io/v1beta1 # for k3s < v1.19 apiVersion : networking.k8s.io/v1 kind : Ingress metadata : name : nginx annotations : ingress.kubernetes.io/ssl-redirect : \"false\" spec : rules : - http : paths : - path : / pathType : Prefix backend : service : name : nginx port : number : 80 Curl it via localhost curl localhost:8081/","title":"1. via Ingress (recommended)"},{"location":"usage/guides/exposing_services/#2-via-nodeport","text":"Create a cluster, mapping the port 30080 from agent-0 to localhost:8082 k3d cluster create mycluster -p \"8082:30080@agent[0]\" --agents 2 Note : Kubernetes\u2019 default NodePort range is 30000-32767 Note : You may as well expose the whole NodePort range from the very beginning, e.g. via k3d cluster create mycluster --agents 3 -p \"30000-32767:30000-32767@server[0]\" (See this video from @portainer ) Warning : Docker creates iptable entries and a new proxy process per port-mapping, so this may take a very long time or even freeze your system! \u2026 (Steps 2 and 3 like above) \u2026 Create a NodePort service for it with kubectl apply -f apiVersion : v1 kind : Service metadata : labels : app : nginx name : nginx spec : ports : - name : 80-80 nodePort : 30080 port : 80 protocol : TCP targetPort : 80 selector : app : nginx type : NodePort Curl it via localhost curl localhost:8082/","title":"2. via NodePort"},{"location":"usage/guides/registries/","text":"Registries \u00b6 Registries configuration file \u00b6 You can add registries by specifying them in a registries.yaml and mounting them at creation time: k3d cluster create mycluster --volume \"/home/YOU/my-registries.yaml:/etc/rancher/k3s/registries.yaml\" . This file is a regular k3s registries configuration file , and looks like this: mirrors : \"my.company.registry:5000\" : endpoint : - http://my.company.registry:5000 In this example, an image with a name like my.company.registry:5000/nginx:latest would be pulled from the registry running at http://my.company.registry:5000 . Note well there is an important limitation: this configuration file will only work with k3s >= v0.10.0 . It will fail silently with previous versions of k3s, but you find in the section below an alternative solution. This file can also be used for providing additional information necessary for accessing some registries, like authentication and certificates . 
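Putting the two pieces above together, a minimal sketch that writes the mirror configuration to ${HOME}/.k3d/my-registries.yaml (the path used in the examples further down) and mounts it at cluster creation:

    mkdir -p "${HOME}/.k3d"
    cat > "${HOME}/.k3d/my-registries.yaml" <<'EOF'
    mirrors:
      "my.company.registry:5000":
        endpoint:
          - http://my.company.registry:5000
    EOF
    k3d cluster create mycluster \
      --volume "${HOME}/.k3d/my-registries.yaml:/etc/rancher/k3s/registries.yaml"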
Authenticated registries \u00b6 When using authenticated registries, we can add the username and password in a configs section in the registries.yaml , like this: mirrors : my.company.registry : endpoint : - http://my.company.registry configs : my.company.registry : auth : username : aladin password : abracadabra Secure registries \u00b6 When using secure registries, the registries.yaml file must include information about the certificates. For example, if you want to use images from the secure registry running at https://my.company.registry , you must first download a CA file valid for that server and store it in some well-known directory like ${HOME}/.k3d/my-company-root.pem . Then you have to mount the CA file in some directory in the nodes in the cluster and include that mounted file in a configs section in the registries.yaml file . For example, if we mount the CA file in /etc/ssl/certs/my-company-root.pem , the registries.yaml will look like: mirrors : my.company.registry : endpoint : - https://my.company.registry configs : my.company.registry : tls : # we will mount \"my-company-root.pem\" in the /etc/ssl/certs/ directory. ca_file : \"/etc/ssl/certs/my-company-root.pem\" Finally, we can create the cluster, mounting the CA file in the path we specified in ca_file : k3d cluster create --volume \" ${ HOME } /.k3d/my-registries.yaml:/etc/rancher/k3s/registries.yaml\" --volume \" ${ HOME } /.k3d/my-company-root.pem:/etc/ssl/certs/my-company-root.pem\" Using a local registry \u00b6 Using k3d-managed registries \u00b6 Just ported! The k3d-managed registry is available again as of k3d v4.0.0 (January 2021) Create a dedicated registry together with your cluster \u00b6 k3d cluster create mycluster --registry-create : This creates your cluster mycluster together with a registry container called k3d-mycluster-registry k3d sets everything up in the cluster for containerd to be able to pull images from that registry (using the registries.yaml file) the port, which the registry is listening on will be mapped to a random port on your host system Check the k3d command output or docker ps -f name = k3d-mycluster-registry to find the exposed port (let\u2019s use 12345 here) Pull some image (optional) docker pull alpine:latest , re-tag it to reference your newly created registry docker tag alpine:latest k3d-mycluster-registry:12345/testimage:local and push it docker push k3d-mycluster-registry:12345/testimage:local Use kubectl to create a new pod in your cluster using that image to see, if the cluster can pull from the new registry: kubectl run --image k3d-mycluster-registry:12345/testimage:local testimage --command -- tail -f /dev/null (creates a container that will not do anything but keep on running) Create a customized k3d-managed registry \u00b6 k3d registry create myregistry.localhost --port 5111 creates a new registry called myregistry.localhost (could be used with automatic resolution of *.localhost , see next section) k3d cluster create newcluster --registry-use k3d-myregistry.localhost:5111 (make sure you use the k3d- prefix here) creates a new cluster set up to us that registry continue with step 3 and 4 from the last section for testing Using your own (not k3d-managed) local registry \u00b6 You can start your own local registry it with some docker commands, like: docker volume create local_registry docker container run -d --name registry.localhost -v local_registry:/var/lib/registry --restart always -p 5000 :5000 registry:2 These commands will start your registry in registry.localhost:5000 . 
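As a quick sanity check that the registry container started above is serving (assuming nothing else occupies host port 5000):

    curl -s http://localhost:5000/v2/_catalog
    # an empty registry answers with: {"repositories":[]}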
In order to push to this registry, you will need to make it accessible as described in the next section. Once your registry is up and running, we will need to add it to your registries.yaml configuration file. Finally, you have to connect the registry network to the k3d cluster network: docker network connect k3d-k3s-default registry.localhost . And then you can test your local registry . Pushing to your local registry address \u00b6 As per the guide above, the registry will be available at registry.localhost:5000 . All the nodes in your k3d cluster can resolve this hostname (thanks to the DNS server provided by the Docker daemon) but, in order to be able to push to this registry, this hostname also has to be resolved by your host. Luckily (for Linux users), NSS-myhostname ships with many Linux distributions and should resolve *.localhost automatically to 127.0.0.1 . Otherwise, it\u2019s installable using sudo apt install libnss-myhostname . If it\u2019s not the case, you can add an entry in your /etc/hosts file like this: 127 .0.0.1 registry.localhost Once again, this will only work with k3s >= v0.10.0 (see the some sections below when using k3s <= v0.9.1) Testing your registry \u00b6 You should test that you can push to your registry from your local development machine. use images from that registry in Deployments in your k3d cluster. We will verify these two things for a local registry (located at registry.localhost:5000 ) running in your development machine. Things would be basically the same for checking an external registry, but some additional configuration could be necessary in your local machine when using an authenticated or secure registry (please refer to Docker\u2019s documentation for this). First, we can download some image (like nginx ) and push it to our local registry with: docker pull nginx:latest docker tag nginx:latest registry.localhost:5000/nginx:latest docker push registry.localhost:5000/nginx:latest Then we can deploy a pod referencing this image to your cluster: cat <= v0.10.0 . It will fail silently with previous versions of k3s, but you find in the section below an alternative solution. This file can also be used for providing additional information necessary for accessing some registries, like authentication and certificates .","title":"Registries configuration file"},{"location":"usage/guides/registries/#authenticated-registries","text":"When using authenticated registries, we can add the username and password in a configs section in the registries.yaml , like this: mirrors : my.company.registry : endpoint : - http://my.company.registry configs : my.company.registry : auth : username : aladin password : abracadabra","title":"Authenticated registries"},{"location":"usage/guides/registries/#secure-registries","text":"When using secure registries, the registries.yaml file must include information about the certificates. For example, if you want to use images from the secure registry running at https://my.company.registry , you must first download a CA file valid for that server and store it in some well-known directory like ${HOME}/.k3d/my-company-root.pem . Then you have to mount the CA file in some directory in the nodes in the cluster and include that mounted file in a configs section in the registries.yaml file . 
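Continuing the testing steps above, a purely illustrative sketch of a Deployment that references the pushed image (the name nginx-test is hypothetical and not taken from this guide):

    cat <<EOF | kubectl apply -f -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-test        # hypothetical name, not from the original guide
      labels:
        app: nginx-test
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx-test
      template:
        metadata:
          labels:
            app: nginx-test
        spec:
          containers:
          - name: nginx
            image: registry.localhost:5000/nginx:latest
    EOF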
For example, if we mount the CA file in /etc/ssl/certs/my-company-root.pem , the registries.yaml will look like: mirrors : my.company.registry : endpoint : - https://my.company.registry configs : my.company.registry : tls : # we will mount \"my-company-root.pem\" in the /etc/ssl/certs/ directory. ca_file : \"/etc/ssl/certs/my-company-root.pem\" Finally, we can create the cluster, mounting the CA file in the path we specified in ca_file : k3d cluster create --volume \" ${ HOME } /.k3d/my-registries.yaml:/etc/rancher/k3s/registries.yaml\" --volume \" ${ HOME } /.k3d/my-company-root.pem:/etc/ssl/certs/my-company-root.pem\"","title":"Secure registries"},{"location":"usage/guides/registries/#using-a-local-registry","text":"","title":"Using a local registry"},{"location":"usage/guides/registries/#using-k3d-managed-registries","text":"Just ported! The k3d-managed registry is available again as of k3d v4.0.0 (January 2021)","title":"Using k3d-managed registries"},{"location":"usage/guides/registries/#create-a-dedicated-registry-together-with-your-cluster","text":"k3d cluster create mycluster --registry-create : This creates your cluster mycluster together with a registry container called k3d-mycluster-registry k3d sets everything up in the cluster for containerd to be able to pull images from that registry (using the registries.yaml file) the port, which the registry is listening on will be mapped to a random port on your host system Check the k3d command output or docker ps -f name = k3d-mycluster-registry to find the exposed port (let\u2019s use 12345 here) Pull some image (optional) docker pull alpine:latest , re-tag it to reference your newly created registry docker tag alpine:latest k3d-mycluster-registry:12345/testimage:local and push it docker push k3d-mycluster-registry:12345/testimage:local Use kubectl to create a new pod in your cluster using that image to see, if the cluster can pull from the new registry: kubectl run --image k3d-mycluster-registry:12345/testimage:local testimage --command -- tail -f /dev/null (creates a container that will not do anything but keep on running)","title":"Create a dedicated registry together with your cluster"},{"location":"usage/guides/registries/#create-a-customized-k3d-managed-registry","text":"k3d registry create myregistry.localhost --port 5111 creates a new registry called myregistry.localhost (could be used with automatic resolution of *.localhost , see next section) k3d cluster create newcluster --registry-use k3d-myregistry.localhost:5111 (make sure you use the k3d- prefix here) creates a new cluster set up to us that registry continue with step 3 and 4 from the last section for testing","title":"Create a customized k3d-managed registry"},{"location":"usage/guides/registries/#using-your-own-not-k3d-managed-local-registry","text":"You can start your own local registry it with some docker commands, like: docker volume create local_registry docker container run -d --name registry.localhost -v local_registry:/var/lib/registry --restart always -p 5000 :5000 registry:2 These commands will start your registry in registry.localhost:5000 . In order to push to this registry, you will need to make it accessible as described in the next section. Once your registry is up and running, we will need to add it to your registries.yaml configuration file. Finally, you have to connect the registry network to the k3d cluster network: docker network connect k3d-k3s-default registry.localhost . 
And then you can test your local registry .","title":"Using your own (not k3d-managed) local registry"},{"location":"usage/guides/registries/#pushing-to-your-local-registry-address","text":"As per the guide above, the registry will be available at registry.localhost:5000 . All the nodes in your k3d cluster can resolve this hostname (thanks to the DNS server provided by the Docker daemon) but, in order to be able to push to this registry, this hostname also has to be resolved by your host. Luckily (for Linux users), NSS-myhostname ships with many Linux distributions and should resolve *.localhost automatically to 127.0.0.1 . Otherwise, it\u2019s installable using sudo apt install libnss-myhostname . If it\u2019s not the case, you can add an entry in your /etc/hosts file like this: 127 .0.0.1 registry.localhost Once again, this will only work with k3s >= v0.10.0 (see the some sections below when using k3s <= v0.9.1)","title":"Pushing to your local registry address"},{"location":"usage/guides/registries/#testing-your-registry","text":"You should test that you can push to your registry from your local development machine. use images from that registry in Deployments in your k3d cluster. We will verify these two things for a local registry (located at registry.localhost:5000 ) running in your development machine. Things would be basically the same for checking an external registry, but some additional configuration could be necessary in your local machine when using an authenticated or secure registry (please refer to Docker\u2019s documentation for this). First, we can download some image (like nginx ) and push it to our local registry with: docker pull nginx:latest docker tag nginx:latest registry.localhost:5000/nginx:latest docker push registry.localhost:5000/nginx:latest Then we can deploy a pod referencing this image to your cluster: cat < https://k3d.io/ - 2021-01-13 + 2021-01-14 daily https://k3d.io/usage/commands/ - 2021-01-13 + 2021-01-14 daily https://k3d.io/usage/kubeconfig/ - 2021-01-13 + 2021-01-14 daily https://k3d.io/usage/multiserver/ - 2021-01-13 + 2021-01-14 daily https://k3d.io/usage/guides/exposing_services/ - 2021-01-13 + 2021-01-14 daily https://k3d.io/usage/guides/registries/ - 2021-01-13 + 2021-01-14 daily https://k3d.io/usage/guides/calico/ - 2021-01-13 + 2021-01-14 daily https://k3d.io/usage/guides/cuda/ - 2021-01-13 + 2021-01-14 daily https://k3d.io/internals/defaults/ - 2021-01-13 + 2021-01-14 daily https://k3d.io/internals/networking/ - 2021-01-13 + 2021-01-14 daily https://k3d.io/faq/faq/ - 2021-01-13 + 2021-01-14 daily https://k3d.io/faq/v1vsv3-comparison/ - 2021-01-13 + 2021-01-14 daily \ No newline at end of file diff --git a/sitemap.xml.gz b/sitemap.xml.gz index e65752165f1483f06344f72415ba908ca0e31be8..cbfd798850d83b4f76630d5515b4ffa95ba3a26f 100644 GIT binary patch literal 315 zcmV-B0mS|viwFo_HUMA(|8r?{Wo=<_E_iKh0L_)LZo?oDhIc=O<=w_kI<%ETwmd;Q zR}{|V6budrcKY_kQ7U!oRAD59`0sw*H~fPBcpFVQgC}(MP<2gPm0+b)X!W6be>>IB z)sYu{^oAgmsoBz@I)^a5GPW#BLl|Hb6E)6jM)I64#2zARo0p2OUS5>z>W!0pHB)k5 z_Nqb0dbWx`bwOctitpN8SGQf=J*DzPd$h>2uo?yrSFQw_ub%xe;EY4q2f^)G9;(Fg1$IsnN^S(|m?Ib*sv;PKD+>i84 N@o(}R5bD+j0038Xlg
      [GIT binary patch for sitemap.xml.gz omitted]
diff --git a/usage/commands/index.html b/usage/commands/index.html
       Command Tree
      k3d
      -  --verbose  # enable verbose (debug) logging (default: false)
      +  --verbose  # GLOBAL: enable verbose (debug) logging (default: false)
      +  --trace  # GLOBAL: enable super verbose logging (trace logging) (default: false)
         --version  # show k3d and k3s version
      -  -h, --help  # show help text
      -  version  # show k3d and k3s version
      -  help [COMMAND]  # show help text for any command
      -  completion [bash | zsh | (psh | powershell)]  # generate completion scripts for common shells
      +  -h, --help  # GLOBAL: show help text
      +
         cluster [CLUSTERNAME]  # default cluster name is 'k3s-default'
           create
      -      --api-port  # specify the port on which the cluster will be accessible (e.g. via kubectl)
      -      -i, --image  # specify which k3s image should be used for the nodes
      -      --k3s-agent-arg  # add additional arguments to the k3s agent (see https://rancher.com/docs/k3s/latest/en/installation/install-options/agent-config/#k3s-agent-cli-help)
      -      --k3s-server-arg  # add additional arguments to the k3s server (see https://rancher.com/docs/k3s/latest/en/installation/install-options/server-config/#k3s-server-cli-help)
      -      -s, --servers  # specify how many server nodes you want to create
      -      --network  # specify a network you want to connect to
      -      --no-hostip # disable the automatic injection of the Host IP as 'host.k3d.internal' into the containers and CoreDN
      -      --no-image-volume  # disable the creation of a volume for storing images (used for the 'k3d load image' command)
      -      --no-lb # disable the creation of a LoadBalancer in front of the server nodes
      -      --no-rollback # disable the automatic rollback actions, if anything goes wrong
      -      -p, --port  # add some more port mappings
      -      --token  # specify a cluster token (default: auto-generated)
      -      --timeout  # specify a timeout, after which the cluster creation will be interrupted and changes rolled back
      -      --update-default-kubeconfig  # enable the automated update of the default kubeconfig with the details of the newly created cluster (also sets '--wait=true')
      -      --switch-context  # (implies --update-default-kubeconfig) automatically sets the current-context of your default kubeconfig to the new cluster's context
      -      -v, --volume  # specify additional bind-mounts
      -      --wait  # enable waiting for all server nodes to be ready before returning
      -      -a, --agents  # specify how many agent nodes you want to create
      -      -e, --env  # add environment variables to the node containers
      +      -a, --agents  # specify how many agent nodes you want to create (integer, default: 0)
      +      --api-port  # specify the port on which the cluster will be accessible (format '[HOST:]HOSTPORT', default: random)
      +      -c, --config  # use a config file (format 'PATH')
      +      -e, --env  # add environment variables to the nodes (quoted string, format: 'KEY[=VALUE][@NODEFILTER[;NODEFILTER...]]', use flag multiple times)
      +      --gpus  # [from docker CLI] add GPU devices to the node containers (string, e.g. 'all')
      +      -i, --image  # specify which k3s image should be used for the nodes (string, default: 'docker.io/rancher/k3s:v1.20.0-k3s2', tag changes per build)
      +      --k3s-agent-arg  # add additional arguments to the k3s agent (quoted string, use flag multiple times) (see https://rancher.com/docs/k3s/latest/en/installation/install-options/agent-config/#k3s-agent-cli-help)
      +      --k3s-server-arg  # add additional arguments to the k3s server (quoted string, use flag multiple times) (see https://rancher.com/docs/k3s/latest/en/installation/install-options/server-config/#k3s-server-cli-help)
      +      --kubeconfig-switch-context  # (implies --kubeconfig-update-default) automatically sets the current-context of your default kubeconfig to the new cluster's context (default: true)
      +      --kubeconfig-update-default  # enable the automated update of the default kubeconfig with the details of the newly created cluster (also sets '--wait=true') (default: true)
      +      -l, --label  # add (docker) labels to the node containers (format: 'KEY[=VALUE][@NODEFILTER[;NODEFILTER...]]', use flag multiple times)
      +      --network  # specify an existing (docker) network you want to connect to (string)
      +      --no-hostip  # disable the automatic injection of the Host IP as 'host.k3d.internal' into the containers and CoreDNS (default: false)
      +      --no-image-volume  # disable the creation of a volume for storing images (used for the 'k3d image import' command) (default: false)
      +      --no-lb  # disable the creation of a load balancer in front of the server nodes (default: false)
      +      --no-rollback  # disable the automatic rollback actions, if anything goes wrong (default: false)
      +      -p, --port  # add some more port mappings (format: '[HOST:][HOSTPORT:]CONTAINERPORT[/PROTOCOL][@NODEFILTER]', use flag multiple times)
      +      --registry-create  # create a new (docker) registry dedicated for this cluster (default: false)
      +      --registry-use  # use an existing local (docker) registry with this cluster (string, use multiple times)
      +      -s, --servers  # specify how many server nodes you want to create (integer, default: 1)
      +      --token  # specify a cluster token (string, default: auto-generated)
      +      --timeout  # specify a timeout, after which the cluster creation will be interrupted and changes rolled back (duration, e.g. '10s')
      +      -v, --volume  # specify additional bind-mounts (format: '[SOURCE:]DEST[@NODEFILTER[;NODEFILTER...]]', use flag multiple times)
      +      --wait  # enable waiting for all server nodes to be ready before returning (default: true)
           start CLUSTERNAME  # start a (stopped) cluster
      -      -a, --all  # start all clusters
      -      --wait  # wait for all servers and server-loadbalancer to be up before returning
      -      --timeout  # maximum waiting time for '--wait' before canceling/returning
      +      -a, --all  # start all clusters (default: false)
      +      --wait  # wait for all servers and server-loadbalancer to be up before returning (default: true)
      +      --timeout  # maximum waiting time for '--wait' before canceling/returning (duration, e.g. '10s')
           stop CLUSTERNAME  # stop a cluster
      -      -a, --all  # stop all clusters
      +      -a, --all  # stop all clusters (default: false)
           delete CLUSTERNAME  # delete an existing cluster
      -      -a, --all  # delete all existing clusters
      +      -a, --all  # delete all existing clusters (default: false)
           list [CLUSTERNAME [CLUSTERNAME ...]]
      -      --no-headers  # do not print headers
      -      --token  # show column with cluster tokens
      +      --no-headers  # do not print headers (default: false)
      +      --token  # show column with cluster tokens (default: false)
      +      -o, --output  # format the output (format: 'json|yaml')
      +  completion [bash | zsh | fish | (psh | powershell)]  # generate completion scripts for common shells
      +  config
      +    init  # write a default k3d config (as a starting point)
      +      -f, --force  # force overwrite target file (default: false)
      +      -o, --output  # file to write to (string, default "k3d-default.yaml")
      +  help [COMMAND]  # show help text for any command
      +  image
      +    import [IMAGE | ARCHIVE [IMAGE | ARCHIVE ...]]  # Load one or more images from the local runtime environment or tar-archives into k3d clusters
      +      -c, --cluster  # clusters to load the image into (string, use flag multiple times, default: k3s-default)
      +      -k, --keep-tarball  # do not delete the image tarball from the shared volume after completion (default: false)
      +  kubeconfig
      +    get (CLUSTERNAME [CLUSTERNAME ...] | --all) # get kubeconfig from cluster(s) and write it to stdout
      +      -a, --all  # get kubeconfigs from all clusters (default: false)
      +    merge | write (CLUSTERNAME [CLUSTERNAME ...] | --all)  # get kubeconfig from cluster(s) and merge it/them into a (kubeconfig-)file
      +      -a, --all  # get kubeconfigs from all clusters (default: false)
      +      -s, --kubeconfig-switch-context  # switch current-context in kubeconfig to the new context (default: true)
      +      -d, --kubeconfig-merge-default  # update the default kubeconfig (usually $KUBECONFIG or $HOME/.kube/config)
      +      -o, --output  # specify the output file where the kubeconfig should be written to (string)
      +      --overwrite  # [Careful!] forcefully overwrite the output file, ignoring existing contents (default: false)
      +      -u, --update  # update conflicting fields in existing kubeconfig (default: true)
         node
           create NODENAME  # Create new nodes (and add them to existing clusters)
      -      -c, --cluster  # specify the cluster that the node shall connect to
      -      -i, --image  # specify which k3s image should be used for the node(s)
      -          --replicas  # specify how many replicas you want to create with this spec
      -          --role  # specify the node role
      -      --wait  # wait for the node to be up and running before returning
      -      --timeout # specify a timeout duration, after which the node creation will be interrupted, if not done yet
      +      -c, --cluster  # specify the cluster that the node shall connect to (string, default: k3s-default)
      +      -i, --image  # specify which k3s image should be used for the node(s) (string, default: 'docker.io/rancher/k3s:v1.20.0-k3s2', tag changes per build)
      +      --replicas  # specify how many replicas you want to create with this spec (integer, default: 1)
      +      --role  # specify the node role (string, format: 'agent|server', default: agent)
      +      --timeout # specify a timeout duration, after which the node creation will be interrupted, if not done yet (duration, e.g. '10s')
      +      --wait  # wait for the node to be up and running before returning (default: true)
           start NODENAME  # start a (stopped) node
           stop NODENAME # stop a node
           delete NODENAME  # delete an existing node
      -      -a, --all  # delete all existing nodes
      +      -a, --all  # delete all existing nodes (default: false)
           list NODENAME
      -      --no-headers  # do not print headers
      -  kubeconfig
      -    get (CLUSTERNAME [CLUSTERNAME ...] | --all) # get kubeconfig from cluster(s) and write it to stdout
      -      -a, --all  # get kubeconfigs from all clusters
      -    merge | write (CLUSTERNAME [CLUSTERNAME ...] | --all)  # get kubeconfig from cluster(s) and merge it/them into into a file in $HOME/.k3d (or whatever you specify via the flags)
      -      -a, --all  # get kubeconfigs from all clusters
      -          --output  # specify the output file where the kubeconfig should be written to
      -          --overwrite  # [Careful!] forcefully overwrite the output file, ignoring existing contents
      -      -s, --switch-context  # switch current-context in kubeconfig to the new context
      -      -u, --update  # update conflicting fields in existing kubeconfig (default: true)
      -      -d, --merge-default-kubeconfig  # update the default kubeconfig (usually $KUBECONFIG or $HOME/.kube/config)
      -  image
      -    import [IMAGE | ARCHIVE [IMAGE | ARCHIVE ...]]  # Load one or more images from the local runtime environment or tar-archives into k3d clusters
      -      -c, --cluster  # clusters to load the image into
      -      -k, --keep-tarball  # do not delete the image tarball from the shared volume after completion
      +      --no-headers  # do not print headers (default: false)
      +  registry
      +    create REGISTRYNAME
      +      -i, --image  # specify image used for the registry (string, default: "docker.io/library/registry:2")
      +      -p, --port  # select host port to map to (format: '[HOST:]HOSTPORT', default: 'random')
      +    delete REGISTRYNAME
      +      -a, --all  # delete all existing registries (default: false)
      +    list [NAME [NAME...]]
      +      --no-headers  # disable table headers (default: false)
      +  version  # show k3d and k3s version
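For orientation, here is one hypothetical invocation combining several of the flags listed above (cluster name, node counts, ports and the boolean flag syntax are placeholder choices, not values taken from the docs):

    # three servers, two agents, a fixed API port and an extra port mapping,
    # plus a cluster-dedicated registry; skip switching the kubectl context
    k3d cluster create demo \
      --servers 3 --agents 2 \
      --api-port 127.0.0.1:6445 \
      -p "8080:80@server[0]" \
      --registry-create \
      --kubeconfig-switch-context=false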
       
      @@ -591,7 +611,7 @@
-      Last update: November 23, 2020
+      Last update: January 5, 2021
diff --git a/usage/guides/cuda/build.sh b/usage/guides/cuda/build.sh
index ae7126c6..ad804c4b 100644
--- a/usage/guides/cuda/build.sh
+++ b/usage/guides/cuda/build.sh
@@ -1,7 +1,7 @@
 #!/bin/bash
 set -e
 cd $(dirname $0)
-
+
 K3S_TAG="${1:-v1.18.10+k3s1}"
 IMAGE_TAG="${K3S_TAG/+/-}"
diff --git a/usage/guides/cuda/index.html b/usage/guides/cuda/index.html
index e2c8daea..6f16378c 100644
--- a/usage/guides/cuda/index.html
+++ b/usage/guides/cuda/index.html
@@ -442,7 +442,7 @@
-  Acknowledgements:
+  Acknowledgements
@@ -662,7 +662,7 @@
-  Acknowledgements:
+  Acknowledgements
@@ -691,7 +691,7 @@ The K3S container itself also needs to run with this runtime. If you are using D

      Building a customized K3S image

      To get the NVIDIA container runtime in the K3S image you need to build your own K3S image. The native K3S image is based on Alpine but the NVIDIA container runtime is not supported on Alpine yet. To get around this we need to build the image with a supported base image.

      Adapt the Dockerfile

-FROM ubuntu:18.04 as base
+FROM ubuntu:18.04 as base
       RUN apt-get update -y && apt-get install -y ca-certificates
       ADD k3s/build/out/data.tar.gz /image
       RUN mkdir -p /image/etc/ssl/certs /image/run /image/var/run /image/tmp /image/lib/modules /image/lib/firmware && \
      @@ -730,11 +730,13 @@ The K3S container itself also needs to run with this runtime. If you are using D
       ENTRYPOINT ["/bin/k3s"]
       CMD ["agent"]
       
-This Dockerfile is based on the K3S Dockerfile.
-The following changes are applied:
-1. Change the base images to Ubuntu 18.04 so the NVIDIA Container Runtime can be installed
-2. Add a custom containerd config.toml template to add the NVIDIA Container Runtime. This replaces the default runc runtime
-3. Add a manifest for the NVIDIA driver plugin for Kubernetes
+This Dockerfile is based on the K3S Dockerfile.
+The following changes are applied:
+
+1. Change the base images to Ubuntu 18.04 so the NVIDIA Container Runtime can be installed
+2. Add a custom containerd config.toml template to add the NVIDIA Container Runtime. This replaces the default runc runtime
+3. Add a manifest for the NVIDIA driver plugin for Kubernetes

      Configure containerd

      We need to configure containerd to use the NVIDIA Container Runtime. We need to customize the config.toml that is used at startup. K3S provides a way to do this using a config.toml.tmpl file. More information can be found on the K3S site.

      [plugins.opt]
      @@ -794,10 +796,12 @@ The following changes are applied:
       {{end}}
       

      The NVIDIA device plugin

-To enable NVIDIA GPU support on Kubernetes you also need to install the NVIDIA device plugin. The device plugin is a daemonset and allows you to automatically:
-* Expose the number of GPUs on each nodes of your cluster
-* Keep track of the health of your GPUs
-* Run GPU enabled containers in your Kubernetes cluster.
+To enable NVIDIA GPU support on Kubernetes you also need to install the NVIDIA device plugin. The device plugin is a daemonset and allows you to automatically:
+
+* Expose the number of GPUs on each node of your cluster
+* Keep track of the health of your GPUs
+* Run GPU enabled containers in your Kubernetes cluster.
      apiVersion: apps/v1
       kind: DaemonSet
       metadata:
      @@ -842,18 +846,22 @@ The following changes are applied:
       

      Build the K3S image

      To build the custom image we need to build K3S because we need the generated output.

-Put the following files in a directory:
-* Dockerfile
-* config.toml.tmpl
-* gpu.yaml
-* build.sh
-* cuda-vector-add.yaml
-
-The build.sh files takes the K3S git tag as argument, it defaults to v1.18.10+k3s1. The script performs the following steps:
-* pulls K3S
-* builds K3S
-* build the custom K3S Docker image
+Put the following files in a directory:
+
+* Dockerfile
+* config.toml.tmpl
+* gpu.yaml
+* build.sh
+* cuda-vector-add.yaml
+
+The build.sh file takes the K3S git tag as an argument; it defaults to v1.18.10+k3s1. The script performs the following steps:
+
+* pulls K3S
+* builds K3S
+* builds the custom K3S Docker image

      The resulting image is tagged as k3s-gpu:<version tag>. The version tag is the git tag but the ‘+’ sign is replaced with a ‘-‘.

-build.sh:
+build.sh:

      #!/bin/bash
       set -e
       cd $(dirname $0)
      @@ -869,33 +877,35 @@ git clone --depth 1 https://github.com/rancher/k3s.git -b
       make
       cd ..
       docker build -t k3s-gpu:$IMAGE_TAG .
      -

      +

      Run and test the custom image with Docker

-You can run a container based on the new image with Docker:
+You can run a container based on the new image with Docker:

      docker run --name k3s-gpu -d --privileged --gpus all k3s-gpu:v1.18.10-k3s1
       
-Deploy a test pod:
+Deploy a test pod:

      docker cp cuda-vector-add.yaml k3s-gpu:/cuda-vector-add.yaml
-docker exec k3s-gpu kubectl apply -f /cuda-vector-add.yaml
-docker exec k3s-gpu kubectl logs cuda-vector-add
+docker exec k3s-gpu kubectl apply -f /cuda-vector-add.yaml
+docker exec k3s-gpu kubectl logs cuda-vector-add

    Run and test the custom image with k3d

-Tou can use the image with k3d:
+You can use the image with k3d:

    k3d cluster create --no-lb --image k3s-gpu:v1.18.10-k3s1 --gpus all
     
-Deploy a test pod:
+Deploy a test pod:

    kubectl apply -f cuda-vector-add.yaml
     kubectl logs cuda-vector-add
    -

    +

    Known issues

    • This approach does not work on WSL2 yet. The NVIDIA driver plugin and container runtime rely on the NVIDIA Management Library (NVML) which is not yet supported. See the CUDA on WSL User Guide.
-Acknowledgements:
-
-Most of the information in this article was obtained from various sources:
-* Add NVIDIA GPU support to k3s with containerd
-* microk8s
-* K3S
+Acknowledgements
+
+Most of the information in this article was obtained from various sources:
+
+* Add NVIDIA GPU support to k3s with containerd
+* microk8s
+* K3S

diff --git a/usage/guides/exposing_services/index.html b/usage/guides/exposing_services/index.html
index cdfe29f9..8aeb2bdc 100644
--- a/usage/guides/exposing_services/index.html
+++ b/usage/guides/exposing_services/index.html
@@ -340,8 +340,8 @@
-  1. via Ingress
+  1. via Ingress (recommended)
@@ -544,8 +544,8 @@
-  1. via Ingress
+  1. via Ingress (recommended)
@@ -575,7 +575,7 @@

        Exposing Services

-1. via Ingress
+1. via Ingress (recommended)

In this example, we will deploy a simple nginx webserver deployment and make it accessible via ingress. Therefore, we have to create the cluster in a way that the internal port 80 (where the traefik ingress controller is listening) is exposed on the host system.

          @@ -598,7 +598,7 @@ Therefore, we have to create the cluster in a way, that the internal port 80 (wh
-  Get the kubeconfig file
+  Get the kubeconfig file (redundant, as k3d cluster create already merges it into your default kubeconfig file)
   export KUBECONFIG="$(k3d kubeconfig write k3s-default)"
@@ -649,6 +649,7 @@ Therefore, we have to create the cluster in a way, that the internal port 80 (wh
   Note: You may as well expose the whole NodePort range from the very beginning, e.g. via k3d cluster create mycluster --agents 3 -p "30000-32767:30000-32767@server[0]" (See this video from @portainer)
+  Warning: Docker creates iptable entries and a new proxy process per port-mapping, so this may take a very long time or even freeze your system!
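Following up on the note above: if you only need a handful of NodePorts, a narrower mapping with the same port-mapping syntax avoids most of that overhead. A sketch (30080 is an arbitrary example port, not taken from the docs):

    k3d cluster create mycluster --agents 3 -p "30080:30080@server[0]"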
      @@ -687,7 +688,7 @@ Therefore, we have to create the cluster in a way, that the internal port 80 (wh
-      Last update: November 10, 2020
+      Last update: January 5, 2021
diff --git a/usage/guides/registries/index.html b/usage/guides/registries/index.html
index ccab66ef..d5b8ed82 100644
--- a/usage/guides/registries/index.html
+++ b/usage/guides/registries/index.html
@@ -388,15 +388,35 @@
-  Using the k3d registry
+  Using k3d-managed registries
+  Using your own (not k3d-managed) local registry
@@ -640,15 +660,35 @@
-  Using the k3d registry
+  Using k3d-managed registries
+  Using your own (not k3d-managed) local registry
        • @@ -742,12 +782,29 @@ For example, if we mount the CA file in /etc/ssl/certs/my-company-root.pem

          Finally, we can create the cluster, mounting the CA file in the path we specified in ca_file:

          k3d cluster create --volume "${HOME}/.k3d/my-registries.yaml:/etc/rancher/k3s/registries.yaml" --volume "${HOME}/.k3d/my-company-root.pem:/etc/ssl/certs/my-company-root.pem"

          Using a local registry

-Using the k3d registry
+Using k3d-managed registries

-Not ported yet
-The k3d-managed registry has not yet been ported from v1.x to v3.x
+Just ported!
+The k3d-managed registry is available again as of k3d v4.0.0 (January 2021)

-Using your own local registry
+Create a dedicated registry together with your cluster

1. k3d cluster create mycluster --registry-create: This creates your cluster mycluster together with a registry container called k3d-mycluster-registry
   * k3d sets everything up in the cluster for containerd to be able to pull images from that registry (using the registries.yaml file)
   * the port which the registry is listening on will be mapped to a random port on your host system
2. Check the k3d command output or docker ps -f name=k3d-mycluster-registry to find the exposed port (let’s use 12345 here)
3. Pull some image (optional) docker pull alpine:latest, re-tag it to reference your newly created registry docker tag alpine:latest k3d-mycluster-registry:12345/testimage:local and push it docker push k3d-mycluster-registry:12345/testimage:local
4. Use kubectl to create a new pod in your cluster using that image to see if the cluster can pull from the new registry: kubectl run --image k3d-mycluster-registry:12345/testimage:local testimage --command -- tail -f /dev/null (creates a container that will not do anything but keep on running)
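Strung together, the steps above might look like the following sketch (12345 stands in for whatever random host port your registry actually got):

    # find the host port the registry was mapped to
    docker ps -f name=k3d-mycluster-registry

    # tag and push a test image into the cluster-dedicated registry
    docker pull alpine:latest
    docker tag alpine:latest k3d-mycluster-registry:12345/testimage:local
    docker push k3d-mycluster-registry:12345/testimage:local

    # run a pod from that image to verify the cluster can pull from the registry
    kubectl run --image k3d-mycluster-registry:12345/testimage:local testimage --command -- tail -f /dev/null
    kubectl get pod testimage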

          Create a customized k3d-managed registry

1. k3d registry create myregistry.localhost --port 5111 creates a new registry called myregistry.localhost (could be used with automatic resolution of *.localhost, see next section)
2. k3d cluster create newcluster --registry-use k3d-myregistry.localhost:5111 (make sure you use the k3d- prefix here) creates a new cluster set up to use that registry
3. continue with steps 3 and 4 from the previous section for testing
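As a compact sketch of this workflow (registry name and port are the example values from the list above; pushing from the host assumes the k3d-myregistry.localhost name resolves there, see the next section):

    k3d registry create myregistry.localhost --port 5111
    k3d cluster create newcluster --registry-use k3d-myregistry.localhost:5111

    # push a test image (the host must be able to resolve k3d-myregistry.localhost)
    docker tag alpine:latest k3d-myregistry.localhost:5111/testimage:local
    docker push k3d-myregistry.localhost:5111/testimage:local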

          Using your own (not k3d-managed) local registry

You can start your own local registry with some docker commands, like:

          docker volume create local_registry
           docker container run -d --name registry.localhost -v local_registry:/var/lib/registry --restart always -p 5000:5000 registry:2
          @@ -772,37 +829,35 @@ Otherwise, it’s installable using sudo apt install libnss-myhostname
           

        We will verify these two things for a local registry (located at registry.localhost:5000) running in your development machine. Things would be basically the same for checking an external registry, but some additional configuration could be necessary in your local machine when using an authenticated or secure registry (please refer to Docker’s documentation for this).

        First, we can download some image (like nginx) and push it to our local registry with:

-```shell script
-docker pull nginx:latest
        docker pull nginx:latest
         docker tag nginx:latest registry.localhost:5000/nginx:latest
         docker push registry.localhost:5000/nginx:latest
        -
        Then we can deploy a pod referencing this image to your cluster:
        -
        -```shell script
        -cat <<EOF | kubectl apply -f -
        -apiVersion: apps/v1
        -kind: Deployment
        -metadata:
        -  name: nginx-test-registry
        -  labels:
        -    app: nginx-test-registry
        -spec:
        -  replicas: 1
        -  selector:
        -    matchLabels:
        -      app: nginx-test-registry
        -  template:
        -    metadata:
        -      labels:
        -        app: nginx-test-registry
        -    spec:
        -      containers:
        -      - name: nginx-test-registry
        -        image: registry.localhost:5000/nginx:latest
        -        ports:
        -        - containerPort: 80
        -EOF
        -

        +
        +

        Then we can deploy a pod referencing this image to your cluster:

        +
        cat <<EOF | kubectl apply -f -
        +apiVersion: apps/v1
        +kind: Deployment
        +metadata:
        +  name: nginx-test-registry
        +  labels:
        +    app: nginx-test-registry
        +spec:
        +  replicas: 1
        +  selector:
        +    matchLabels:
        +      app: nginx-test-registry
        +  template:
        +    metadata:
        +      labels:
        +        app: nginx-test-registry
        +    spec:
        +      containers:
        +      - name: nginx-test-registry
        +        image: registry.localhost:5000/nginx:latest
        +        ports:
        +        - containerPort: 80
        +EOF
        +

        Then you should check that the pod is running with kubectl get pods -l "app=nginx-test-registry".

        Configuring registries for k3s <= v0.9.1

k3s servers below v0.9.1 do not recognize the registries.yaml file as described in
@@ -847,7 +902,7 @@ sandbox_image = "{{ .NodeConfig.AgentConfig.PauseImage }}"

-      Last update: October 2, 2020
+      Last update: January 7, 2021
diff --git a/usage/kubeconfig/index.html b/usage/kubeconfig/index.html
index 3b152001..0323b1da 100644
--- a/usage/kubeconfig/index.html
+++ b/usage/kubeconfig/index.html
@@ -587,8 +587,8 @@

        Handling Kubeconfigs

-By default, k3d won’t touch your kubeconfig without you telling it to do so.
-To get a kubeconfig set up for you to connect to a k3d cluster, you can go different ways.
+By default, k3d will update your default kubeconfig with your new cluster’s details and set the current-context to it (can be disabled).
+To get a kubeconfig set up for you to connect to a k3d cluster without this automatic update, you can go different ways.

        What is the default kubeconfig?

        We determine the path of the used or default kubeconfig in two ways:

        1. Using the KUBECONFIG environment variable, if it specifies exactly one file
        2. @@ -604,16 +604,16 @@ To get a kubeconfig set up for you to connect to a k3d cluster, you can go diffe
        3. Note 2: alternatively you can use k3d kubeconfig get mycluster > some-file.yaml
-  Update your default kubeconfig upon cluster creation
-    k3d cluster create mycluster --update-kubeconfig
-      Note: this won’t switch the current-context (append --switch-context to do so)
+  Update your default kubeconfig upon cluster creation (DEFAULT)
+    k3d cluster create mycluster --kubeconfig-update-default
+      Note: this won’t switch the current-context (append --kubeconfig-switch-context to do so)
   Update your default kubeconfig after cluster creation
-    k3d kubeconfig merge mycluster --merge-default-kubeconfig
-      Note: this won’t switch the current-context (append --switch-context to do so)
+    k3d kubeconfig merge mycluster --kubeconfig-merge-default
+      Note: this won’t switch the current-context (append --kubeconfig-switch-context to do so)

            Switching the current context

 None of the above options switch the current-context by default. This is intended to be least intrusive, since the current-context has a global effect.
-You can switch the current-context directly with the kubeconfig merge command by adding the --switch-context flag.
+You can switch the current-context directly with the kubeconfig merge command by adding the --kubeconfig-switch-context flag.

            Removing cluster details from the kubeconfig

            k3d cluster delete mycluster will always remove the details for mycluster from the default kubeconfig. It will also delete the respective kubeconfig file in $HOME/.k3d/ if it exists.

            Handling multiple clusters

 k3d kubeconfig merge lets you specify one or more clusters via arguments or all via --all.
-All kubeconfigs will then be merged into a single file if --merge-default-kubeconfig or --output is specified.
+All kubeconfigs will then be merged into a single file if --kubeconfig-merge-default or --output is specified.
 If none of those two flags was specified, a new file will be created per cluster and the merged path (e.g. $HOME/.k3d/kubeconfig-cluster1.yaml:$HOME/.k3d/cluster2.yaml) will be returned.
-Note, that with multiple cluster specified, the --switch-context flag will change the current context to the cluster which was last in the list.
+Note that with multiple clusters specified, the --kubeconfig-switch-context flag will change the current context to the cluster which was last in the list.
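For example, merging two clusters into one dedicated file (cluster and file names are placeholders):

    k3d kubeconfig merge cluster1 cluster2 --output ~/.k3d/all-clusters.yaml
    export KUBECONFIG=~/.k3d/all-clusters.yaml
    kubectl config get-contexts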

            @@ -649,7 +649,7 @@ Note, that with multiple cluster specified, the --switch-context fl
-      Last update: August 5, 2020
+      Last update: January 5, 2021