From f2e35a262b1002e26ef04b4db485bc000e658a3a Mon Sep 17 00:00:00 2001 From: iwilltry42 Date: Mon, 15 Apr 2024 06:14:29 +0100 Subject: [PATCH] Deployed e9babb74 to v5.6.3 with MkDocs 1.5.3 and mike 1.1.2 --- v5.6.3/search/search_index.json | 2 +- v5.6.3/sitemap.xml | 96 ++++---- v5.6.3/sitemap.xml.gz | Bin 518 -> 518 bytes v5.6.3/usage/advanced/cuda/Dockerfile | 45 ++-- v5.6.3/usage/advanced/cuda/build.sh | 12 +- v5.6.3/usage/advanced/cuda/config.toml.tmpl | 55 ----- .../usage/advanced/cuda/cuda-vector-add.yaml | 1 + .../cuda/device-plugin-daemonset.yaml | 44 ++-- v5.6.3/usage/advanced/cuda/index.html | 222 ++++++------------ 9 files changed, 163 insertions(+), 314 deletions(-) delete mode 100644 v5.6.3/usage/advanced/cuda/config.toml.tmpl diff --git a/v5.6.3/search/search_index.json b/v5.6.3/search/search_index.json index e607f535..2ca69b3e 100644 --- a/v5.6.3/search/search_index.json +++ b/v5.6.3/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Overview","text":""},{"location":"#what-is-k3d","title":"What is k3d?","text":"

k3d is a lightweight wrapper to run k3s (Rancher Lab\u2019s minimal Kubernetes distribution) in docker.

k3d makes it very easy to create single- and multi-node k3s clusters in docker, e.g. for local development on Kubernetes.

Note: k3d is a community-driven project but it\u2019s not an official Rancher (SUSE) product. Sponsoring: To spend any significant amount of time improving k3d, we rely on sponsorships:

- GitHub Sponsors: - LiberaPay: - IssueHunt: https://issuehunt.io/r/k3d-io/k3d

View a quick demo

"},{"location":"#learning","title":"Learning","text":"

k3d demo repository: iwilltry42/k3d-demo

Featured use-cases include:

"},{"location":"#requirements","title":"Requirements","text":""},{"location":"#releases","title":"Releases","text":"Platform Stage Version Release Date Downloads so far GitHub Releases stable GitHub Releases latest Homebrew stable - - Chocolatey stable - - Scoop stable - -"},{"location":"#installation","title":"Installation","text":"

You have several options there:

"},{"location":"#install-script","title":"Install Script","text":""},{"location":"#install-current-latest-release","title":"Install current latest release","text":" "},{"location":"#install-specific-release","title":"Install specific release","text":"

Use the install script to grab a specific release (via TAG environment variable):
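A sketch of what that looks like, assuming the install script location documented in the k3d README (the version shown is illustrative):

```shell
# pin the release the install script fetches via the TAG environment variable
curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | TAG=v5.6.3 bash
```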

"},{"location":"#other-installers","title":"Other Installers","text":"Other Installation Methods "},{"location":"#quick-start","title":"Quick Start","text":"

Create a cluster named mycluster with just a single server node:

k3d cluster create mycluster\n

Use the new cluster with kubectl, e.g.:

kubectl get nodes\n
Getting the cluster\u2019s kubeconfig (included in k3d cluster create)

Get the new cluster\u2019s connection details merged into your default kubeconfig (usually specified using the KUBECONFIG environment variable or the default path $HOME/.kube/config) and directly switch to the new context:

k3d kubeconfig merge mycluster --kubeconfig-switch-context\n
"},{"location":"#connect","title":"Connect","text":"
  1. Join the Rancher community on slack via slack.rancher.io
  2. Go to rancher-users.slack.com and join our channel #k3d
  3. Start chatting
"},{"location":"#related-projects","title":"Related Projects","text":""},{"location":"design/concepts/","title":"Concepts","text":""},{"location":"design/concepts/#nodefilters","title":"Nodefilters","text":""},{"location":"design/concepts/#about","title":"About","text":"

Nodefilters are a concept in k3d to specify which nodes of a newly created cluster a condition or setting should apply to.

"},{"location":"design/concepts/#syntax","title":"Syntax","text":"

The overall syntax is @<group>:<subset>[:<suffix>].
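As an illustrative sketch (cluster name and values are placeholders), the part after the @ in a flag selects which nodes it applies to:

```shell
# map host port 8080 to port 80 only on the loadbalancer node,
# and set an env var only on the first server node
k3d cluster create demo \
  -p "8080:80@loadbalancer" \
  -e "FOO=bar@server:0"
```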

"},{"location":"design/concepts/#example","title":"Example","text":""},{"location":"design/defaults/","title":"Defaults","text":""},{"location":"design/defaults/#k3d-reserved-settings","title":"k3d reserved settings","text":"

When you create a K3s cluster in Docker using k3d, we make use of some K3s configuration options, making them \u201creserved\u201d for k3d. This means that overriding those options with your own values may break the cluster setup.

"},{"location":"design/defaults/#environment-variables","title":"Environment Variables","text":"

The following K3s environment variables are used to configure the cluster:

Variable K3d Default Configurable? K3S_URL https://$CLUSTERNAME-server-0:6443 no K3S_TOKEN random yes (--token) K3S_KUBECONFIG_OUTPUT /output/kubeconfig.yaml no"},{"location":"design/defaults/#k3d-loadbalancer","title":"k3d Loadbalancer","text":"

By default, k3d creates an Nginx loadbalancer alongside the clusters it creates to handle the port-forwarding. The loadbalancer can partly be configured using k3d-defined settings.

Nginx setting k3d default k3d setting proxy_timeout (default for all server stanzas) 600 (s) settings.defaultProxyTimeout worker_connections 1024 settings.workerConnections"},{"location":"design/defaults/#overrides","title":"Overrides","text":""},{"location":"design/defaults/#multiple-server-nodes","title":"Multiple server nodes","text":""},{"location":"design/defaults/#api-ports","title":"API-Ports","text":""},{"location":"design/defaults/#kubeconfig","title":"Kubeconfig","text":""},{"location":"design/defaults/#networking","title":"Networking","text":""},{"location":"design/networking/","title":"Networking","text":""},{"location":"design/networking/#introduction","title":"Introduction","text":"

By default, k3d creates a new (docker) network for every new cluster. Use the --network STRING flag upon creation to connect to an existing network instead. Existing networks won\u2019t be managed by k3d together with the cluster lifecycle.

"},{"location":"design/networking/#connecting-to-docker-internalpre-defined-networks","title":"Connecting to docker \u201cinternal\u201d/pre-defined networks","text":""},{"location":"design/networking/#host-network","title":"host network","text":"

When using the --network flag to connect to the host network (i.e. k3d cluster create --network host), you won\u2019t be able to create more than one server node. An edge case would be one server node (with agent disabled) and one agent node.

"},{"location":"design/networking/#bridge-network","title":"bridge network","text":"

By default, every network that k3d creates works in bridge mode. However, when you use --network bridge to connect to docker\u2019s internal bridge network, you may run into issues with grabbing certificates from the API-Server. Single-node clusters should work, though.

"},{"location":"design/networking/#none-network","title":"none \u201cnetwork\u201d","text":"

Well.. this doesn\u2019t really make sense for k3d anyway \u00af\\_(\u30c4)_/\u00af

"},{"location":"design/project/","title":"Project Overview","text":""},{"location":"design/project/#about-this-page","title":"About This Page","text":"

On this page we\u2019ll try to give an overview of all the moving bits and pieces in k3d to ease contributions to the project.

"},{"location":"design/project/#directory-overview","title":"Directory Overview","text":""},{"location":"design/project/#packages-overview","title":"Packages Overview","text":""},{"location":"design/project/#anatomy-of-a-cluster","title":"Anatomy of a Cluster","text":"

By default, every k3d cluster consists of at least 2 containers (nodes):

  1. (optional, but default and strongly recommended) loadbalancer

    • image: ghcr.io/k3d-io/k3d-proxy, built from proxy/
    • purpose: proxy and load balance requests from the outside (i.e. most of the time your local host) to the cluster
      • by default, it e.g. proxies all traffic for the Kubernetes API arriving on port 6443 (default listening port of K3s) to all the server nodes in the cluster
      • can be used for multiple port-mappings to one or more nodes in your cluster
        • that way, port-mappings can also easily be added/removed after the cluster creation, as we can simply re-create the proxy without affecting cluster state
  2. (required, always present) primary server node

    • image: rancher/k3s, built from github.com/k3s-io/k3s
    • purpose: (initializing) server (formerly: master) node of the cluster
      • runs the K3s executable (which runs containerd, the Kubernetes API Server, etcd/sqlite, etc.): k3s server
      • in a multi-server setup, it initializes the cluster with an embedded etcd database (using the K3s --cluster-init flag)
  3. (optional) secondary server node(s)

    • image: rancher/k3s, built from github.com/k3s-io/k3s
  4. (optional) agent node(s)

    • image: rancher/k3s, built from github.com/k3s-io/k3s
    • purpose: running the K3s agent process (kubelet, etc.): k3s agent
"},{"location":"design/project/#automation-ci","title":"Automation (CI)","text":"

The k3d repository mainly leverages the following two CI systems:

"},{"location":"design/project/#documentation","title":"Documentation","text":"

The website k3d.io, containing all the documentation for k3d, is built using mkdocs, configured via the mkdocs.yml config file, with all the content residing in the docs/ directory (Markdown). Use mkdocs serve in the repository root to build and serve the webpage locally. Some parts of the documentation are auto-generated, e.g. docs/usage/commands/ is generated using Cobra\u2019s command docs generation functionality in docgen/.

"},{"location":"faq/compatibility/","title":"Compatibility","text":"

With each release, we test whether k3d works with specific versions of Docker and K3s, to ensure that at least the most recent Docker versions and the active K3s release channels (i.e. non-EOL channels, similar to Kubernetes) work properly with it. These tests run automatically in GitHub Actions. Some versions of Docker and K3s are expected to fail with specific versions of k3d, e.g. due to incompatible dependencies or missing features. We test a full cluster lifecycle with different K3s channels, meaning that the following list refers to the current latest version released under the given channel.

"},{"location":"faq/compatibility/#releases","title":"Releases","text":""},{"location":"faq/compatibility/#v540-26032022","title":"v5.4.0 - 26.03.2022","text":"

Test Workflow: https://github.com/k3d-io/k3d/actions/runs/2044325827

"},{"location":"faq/compatibility/#docker","title":"Docker","text":"

Expected to Fail with the following versions:

"},{"location":"faq/compatibility/#k3s","title":"K3s","text":"

Expected to Fail with the following versions:

"},{"location":"faq/compatibility/#v530-03022022","title":"v5.3.0 - 03.02.2022","text":""},{"location":"faq/compatibility/#docker_1","title":"Docker","text":"

Expected to Fail with the following versions:

"},{"location":"faq/compatibility/#k3s_1","title":"K3s","text":"

Expected to Fail with the following versions:

"},{"location":"faq/faq/","title":"FAQ","text":""},{"location":"faq/faq/#issues-with-btrfs","title":"Issues with BTRFS","text":""},{"location":"faq/faq/#issues-with-zfs","title":"Issues with ZFS","text":""},{"location":"faq/faq/#pods-evicted-due-to-lack-of-disk-space","title":"Pods evicted due to lack of disk space","text":""},{"location":"faq/faq/#restarting-a-multi-server-cluster-or-the-initializing-server-node-fails","title":"Restarting a multi-server cluster or the initializing server node fails","text":""},{"location":"faq/faq/#passing-additional-argumentsflags-to-k3s-and-on-to-eg-the-kube-apiserver","title":"Passing additional arguments/flags to k3s (and on to e.g. the kube-apiserver)","text":" "},{"location":"faq/faq/#how-to-access-services-like-a-database-running-on-my-docker-host-machine","title":"How to access services (like a database) running on my Docker Host Machine","text":""},{"location":"faq/faq/#running-behind-a-corporate-proxy","title":"Running behind a corporate proxy","text":"

Running k3d behind a corporate proxy can lead to some issues with k3d that have already been reported in more than one issue. Some can be fixed by passing the HTTP_PROXY environment variables to k3d, some have to be fixed in docker\u2019s daemon.json file and some are as easy as adding a volume mount.
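As an illustrative sketch (the proxy address is a placeholder), the proxy environment variables can be passed into all node containers via the -e/--env flag with a nodefilter:

```shell
# forward corporate proxy settings into all server and agent nodes
# (proxy.corp.example:3128 is a placeholder for your proxy)
k3d cluster create behind-proxy \
  -e "HTTP_PROXY=http://proxy.corp.example:3128@server:*;agent:*" \
  -e "HTTPS_PROXY=http://proxy.corp.example:3128@server:*;agent:*" \
  -e "NO_PROXY=localhost,127.0.0.1@server:*;agent:*"
```

Some proxy issues cannot be solved this way and require changes to docker\u2019s daemon.json or additional volume mounts, as noted above.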

"},{"location":"faq/faq/#pods-fail-to-start-x509-certificate-signed-by-unknown-authority","title":"Pods fail to start: x509: certificate signed by unknown authority","text":" "},{"location":"faq/faq/#spurious-pid-entries-in-proc-after-deleting-k3d-cluster-with-shared-mounts","title":"Spurious PID entries in /proc after deleting k3d cluster with shared mounts","text":""},{"location":"faq/faq/#solved-nodes-fail-to-start-or-get-stuck-in-notready-state-with-log-nf_conntrack_max-permission-denied","title":"[SOLVED] Nodes fail to start or get stuck in NotReady state with log nf_conntrack_max: permission denied","text":""},{"location":"faq/faq/#problem","title":"Problem","text":""},{"location":"faq/faq/#workaround","title":"Workaround","text":""},{"location":"faq/faq/#fix","title":"Fix","text":"

This is going to be fixed \u201cupstream\u201d in k3s itself in rancher/k3s#3337 and backported to k3s versions as low as v1.18.

"},{"location":"faq/faq/#dockerhub-pull-rate-limit","title":"DockerHub Pull Rate Limit","text":""},{"location":"faq/faq/#problem_1","title":"Problem","text":"

You\u2019re deploying something to the cluster using an image from DockerHub and the image fails to be pulled, with a 429 response code and a message saying You have reached your pull rate limit. You may increase the limit by authenticating and upgrading.

"},{"location":"faq/faq/#cause","title":"Cause","text":"

This is caused by DockerHub\u2019s pull rate limit (see https://docs.docker.com/docker-hub/download-rate-limit/), which limits pulls by unauthenticated/anonymous users to 100 pulls per 6 hours and by authenticated users (not paying customers) to 200 pulls per 6 hours (as of the time of writing).

"},{"location":"faq/faq/#solution","title":"Solution","text":"

There are a few options: (a) use images from a private registry, e.g. configured as a pull-through cache for DockerHub; (b) use a different public registry without such limitations, if the same image is available there; (c) authenticate containerd inside k3s/k3d against DockerHub with your user.

"},{"location":"faq/faq/#c-authenticate-containerd-against-dockerhub","title":"(c) Authenticate Containerd against DockerHub","text":"
  1. Create a registry configuration file for containerd:

    # saved as e.g. $HOME/registries.yaml\nconfigs:\n  \"docker.io\":\n    auth:\n      username: \"$USERNAME\"\n      password: \"$PASSWORD\"\n
  2. Create a k3d cluster using that config:

    k3d cluster create --registry-config $HOME/registries.yaml\n
  3. Profit. That\u2019s it. In our test, we pulled the same image 120 times in a row (confirming that the pull count went up) without being rate limited (as a non-paying, regular user).

"},{"location":"faq/faq/#longhorn-in-k3d","title":"Longhorn in k3d","text":""},{"location":"faq/faq/#problem_2","title":"Problem","text":"

Longhorn is not working when deployed in a K3s cluster spawned with k3d.

"},{"location":"faq/faq/#cause_1","title":"Cause","text":"

The container image of K3s is quite limited and doesn\u2019t contain the necessary libraries. Additional volume mounts and more would also be required to get Longhorn up and running properly. In short, Longhorn relies too heavily on the host OS to work properly in a dockerized environment without significant modifications.

"},{"location":"faq/faq/#solution_1","title":"Solution","text":"

There are a few ways one can build a working image to use with k3d. See https://github.com/k3d-io/k3d/discussions/478 for more info.

"},{"location":"usage/commands/","title":"Command Tree","text":"
k3d\n  --verbose  # GLOBAL: enable verbose (debug) logging (default: false)\n  --trace  # GLOBAL: enable super verbose logging (trace logging) (default: false)\n  --version  # show k3d and k3s version\n  -h, --help  # GLOBAL: show help text\n\n  cluster [CLUSTERNAME]  # default cluster name is 'k3s-default'\n    create\n      -a, --agents  # specify how many agent nodes you want to create (integer, default: 0)\n      --agents-memory # specify memory limit for agent containers/nodes (unit, e.g. 1g)\n      --api-port  # specify the port on which the cluster will be accessible (format '[HOST:]HOSTPORT', default: random)\n      -c, --config  # use a config file (format 'PATH')\n      -e, --env  # add environment variables to the nodes (quoted string, format: 'KEY[=VALUE][@NODEFILTER[;NODEFILTER...]]', use flag multiple times)\n      --gpus  # [from docker CLI] add GPU devices to the node containers (string, e.g. 'all')\n      -i, --image  # specify which k3s image should be used for the nodes (string, default: 'docker.io/rancher/k3s:v1.20.0-k3s2', tag changes per build)\n      --k3s-arg  # add additional arguments to the k3s server/agent (quoted string, use flag multiple times) (see https://rancher.com/docs/k3s/latest/en/installation/install-options/server-config/#k3s-server-cli-help & https://rancher.com/docs/k3s/latest/en/installation/install-options/agent-config/#k3s-agent-cli-help)\n      --kubeconfig-switch-context  # (implies --kubeconfig-update-default) automatically sets the current-context of your default kubeconfig to the new cluster's context (default: true)\n      --kubeconfig-update-default  # enable the automated update of the default kubeconfig with the details of the newly created cluster (also sets '--wait=true') (default: true)\n      -l, --label  # add (docker) labels to the node containers (format: 'KEY[=VALUE][@NODEFILTER[;NODEFILTER...]]', use flag multiple times)\n      --network  # specify an existing (docker) network you want to connect to 
(string)\n      --no-hostip  # disable the automatic injection of the Host IP as 'host.k3d.internal' into the containers and CoreDNS (default: false)\n      --no-image-volume  # disable the creation of a volume for storing images (used for the 'k3d image import' command) (default: false)\n      --no-lb  # disable the creation of a load balancer in front of the server nodes (default: false)\n      --no-rollback  # disable the automatic rollback actions, if anything goes wrong (default: false)\n      -p, --port  # add some more port mappings (format: '[HOST:][HOSTPORT:]CONTAINERPORT[/PROTOCOL][@NODEFILTER]', use flag multiple times)\n      --registry-create  # create a new (docker) registry dedicated for this cluster (default: false)\n      --registry-use  # use an existing local (docker) registry with this cluster (string, use multiple times)\n      -s, --servers  # specify how many server nodes you want to create (integer, default: 1)\n      --servers-memory # specify memory limit for server containers/nodes (unit, e.g. 1g)\n      --token  # specify a cluster token (string, default: auto-generated)\n      --timeout  # specify a timeout, after which the cluster creation will be interrupted and changes rolled back (duration, e.g. '10s')\n      -v, --volume  # specify additional bind-mounts (format: '[SOURCE:]DEST[@NODEFILTER[;NODEFILTER...]]', use flag multiple times)\n      --wait  # enable waiting for all server nodes to be ready before returning (default: true)\n    start CLUSTERNAME  # start a (stopped) cluster\n      -a, --all  # start all clusters (default: false)\n      --wait  # wait for all servers and server-loadbalancer to be up before returning (default: true)\n      --timeout  # maximum waiting time for '--wait' before canceling/returning (duration, e.g. 
'10s')\n    stop CLUSTERNAME  # stop a cluster\n      -a, --all  # stop all clusters (default: false)\n    delete CLUSTERNAME  # delete an existing cluster\n      -a, --all  # delete all existing clusters (default: false)\n    list [CLUSTERNAME [CLUSTERNAME ...]]\n      --no-headers  # do not print headers (default: false)\n      --token  # show column with cluster tokens (default: false)\n      -o, --output  # format the output (format: 'json|yaml')\n  completion [bash | zsh | fish | (psh | powershell)]  # generate completion scripts for common shells\n  config\n    init  # write a default k3d config (as a starting point)\n      -f, --force  # force overwrite target file (default: false)\n      -o, --output  # file to write to (string, default \"k3d-default.yaml\")\n  help [COMMAND]  # show help text for any command\n  image\n    import [IMAGE | ARCHIVE [IMAGE | ARCHIVE ...]]  # Load one or more images from the local runtime environment or tar-archives into k3d clusters\n      -c, --cluster  # clusters to load the image into (string, use flag multiple times, default: k3s-default)\n      -k, --keep-tarball  # do not delete the image tarball from the shared volume after completion (default: false)\n  kubeconfig\n    get (CLUSTERNAME [CLUSTERNAME ...] | --all) # get kubeconfig from cluster(s) and write it to stdout\n      -a, --all  # get kubeconfigs from all clusters (default: false)\n    merge | write (CLUSTERNAME [CLUSTERNAME ...] | --all)  # get kubeconfig from cluster(s) and merge it/them into a (kubeconfig-)file\n      -a, --all  # get kubeconfigs from all clusters (default: false)\n      -s, --kubeconfig-switch-context  # switch current-context in kubeconfig to the new context (default: true)\n      -d, --kubeconfig-merge-default  # update the default kubeconfig (usually $KUBECONFIG or $HOME/.kube/config)\n      -o, --output  # specify the output file where the kubeconfig should be written to (string)\n      --overwrite  # [Careful!] 
forcefully overwrite the output file, ignoring existing contents (default: false)\n      -u, --update  # update conflicting fields in existing kubeconfig (default: true)\n  node\n    create NODENAME  # Create new nodes (and add them to existing clusters)\n      -c, --cluster  # specify the cluster that the node shall connect to (string, default: k3s-default)\n      -i, --image  # specify which k3s image should be used for the node(s) (string, default: 'docker.io/rancher/k3s:v1.20.0-k3s2', tag changes per build)\n      --replicas  # specify how many replicas you want to create with this spec (integer, default: 1)\n      --role  # specify the node role (string, format: 'agent|server', default: agent)\n      --timeout # specify a timeout duration, after which the node creation will be interrupted, if not done yet (duration, e.g. '10s')\n      --wait  # wait for the node to be up and running before returning (default: true)\n    start NODENAME  # start a (stopped) node\n    stop NODENAME # stop a node\n    delete NODENAME  # delete an existing node\n      -a, --all  # delete all existing nodes (default: false)\n      -r, --registries  # also delete registries, as a special type of node (default: false)\n    list NODENAME\n      --no-headers  # do not print headers (default: false)\n  registry\n    create REGISTRYNAME\n      -i, --image  # specify image used for the registry (string, default: \"docker.io/library/registry:2\")\n      -p, --port  # select host port to map to (format: '[HOST:]HOSTPORT', default: 'random')\n    delete REGISTRYNAME\n      -a, --all  # delete all existing registries (default: false)\n    list [NAME [NAME...]]\n      --no-headers  # disable table headers (default: false)\n  version  # show k3d and k3s version\n
"},{"location":"usage/configfile/","title":"Using Config Files","text":"

The config file feature is available as of k3d v4.0.0

"},{"location":"usage/configfile/#introduction","title":"Introduction","text":"

Syntax & Semantics

The options defined in the config file do not map 1:1 to the CLI flags; naming, style, usage and structure may differ, e.g.

"},{"location":"usage/configfile/#usage","title":"Usage","text":"

Using a config file is as easy as putting it in a well-known place in your file system and then referencing it via flag:

"},{"location":"usage/configfile/#required-fields","title":"Required Fields","text":"

As of the time of writing this documentation, the config file only requires you to define two fields:

So this would be the minimal config file, which configures absolutely nothing:

apiVersion: k3d.io/v1alpha5\nkind: Simple\n
"},{"location":"usage/configfile/#config-options","title":"Config Options","text":"

The configuration options for k3d are continuously evolving and so is the config file (syntax) itself. Currently, the config file is still in an alpha state, meaning that it is subject to change at any time (though we try to keep breaking changes to a minimum).

Validation via JSON-Schema

k3d uses a JSON-Schema to describe the expected format and fields of the configuration file. This schema is also used to validate a user-given config file. It can be found in the specific config version sub-directory in the repository (e.g. here for v1alpha5) and can be used to look up supported fields or by linters to validate the config file, e.g. in your code editor.

"},{"location":"usage/configfile/#all-options-example","title":"All Options: Example","text":"

Since the config options and the config file are changing quite a bit, it\u2019s hard to keep track of all the supported config file settings, so here\u2019s an example showing all of them as of the time of writing:

# k3d configuration file, saved as e.g. /home/me/myk3dcluster.yaml\napiVersion: k3d.io/v1alpha5 # this will change in the future as we make everything more stable\nkind: Simple # internally, we also have a Cluster config, which is not yet available externally\nmetadata:\n  name: mycluster # name that you want to give to your cluster (will still be prefixed with `k3d-`)\nservers: 1 # same as `--servers 1`\nagents: 2 # same as `--agents 2`\nkubeAPI: # same as `--api-port myhost.my.domain:6445` (where the name would resolve to 127.0.0.1)\n  host: \"myhost.my.domain\" # important for the `server` setting in the kubeconfig\n  hostIP: \"127.0.0.1\" # where the Kubernetes API will be listening on\n  hostPort: \"6445\" # where the Kubernetes API listening port will be mapped to on your host system\nimage: rancher/k3s:v1.20.4-k3s1 # same as `--image rancher/k3s:v1.20.4-k3s1`\nnetwork: my-custom-net # same as `--network my-custom-net`\nsubnet: \"172.28.0.0/16\" # same as `--subnet 172.28.0.0/16`\ntoken: superSecretToken # same as `--token superSecretToken`\nvolumes: # repeatable flags are represented as YAML lists\n  - volume: /my/host/path:/path/in/node # same as `--volume '/my/host/path:/path/in/node@server:0;agent:*'`\n    nodeFilters:\n      - server:0\n      - agent:*\nports:\n  - port: 8080:80 # same as `--port '8080:80@loadbalancer'`\n    nodeFilters:\n      - loadbalancer\nenv:\n  - envVar: bar=baz # same as `--env 'bar=baz@server:0'`\n    nodeFilters:\n      - server:0\nregistries: # define how registries should be created or used\n  create: # creates a default registry to be used with the cluster; same as `--registry-create registry.localhost`\n    name: registry.localhost\n    host: \"0.0.0.0\"\n    hostPort: \"5000\"\n    proxy: # omit this to have a \"normal\" registry, set this to create a registry proxy (pull-through cache)\n      remoteURL: https://registry-1.docker.io # mirror the DockerHub registry\n      username: \"\" # unauthenticated\n      password: 
\"\" # unauthenticated\n    volumes:\n      - /some/path:/var/lib/registry # persist registry data locally\n  use:\n    - k3d-myotherregistry:5000 # some other k3d-managed registry; same as `--registry-use 'k3d-myotherregistry:5000'`\n  config: | # define contents of the `registries.yaml` file (or reference a file); same as `--registry-config /path/to/config.yaml`\n    mirrors:\n      \"my.company.registry\":\n        endpoint:\n          - http://my.company.registry:5000\nhostAliases: # /etc/hosts style entries to be injected into /etc/hosts in the node containers and in the NodeHosts section in CoreDNS\n  - ip: 1.2.3.4\n    hostnames: \n      - my.host.local\n      - that.other.local\n  - ip: 1.1.1.1\n    hostnames:\n      - cloud.flare.dns\noptions:\n  k3d: # k3d runtime settings\n    wait: true # wait for cluster to be usable before returning; same as `--wait` (default: true)\n    timeout: \"60s\" # wait timeout before aborting; same as `--timeout 60s`\n    disableLoadbalancer: false # same as `--no-lb`\n    disableImageVolume: false # same as `--no-image-volume`\n    disableRollback: false # same as `--no-Rollback`\n    loadbalancer:\n      configOverrides:\n        - settings.workerConnections=2048\n  k3s: # options passed on to K3s itself\n    extraArgs: # additional arguments passed to the `k3s server|agent` command; same as `--k3s-arg`\n      - arg: \"--tls-san=my.host.domain\"\n        nodeFilters:\n          - server:*\n    nodeLabels:\n      - label: foo=bar # same as `--k3s-node-label 'foo=bar@agent:1'` -> this results in a Kubernetes node label\n        nodeFilters:\n          - agent:1\n  kubeconfig:\n    updateDefaultKubeconfig: true # add new cluster to your default Kubeconfig; same as `--kubeconfig-update-default` (default: true)\n    switchCurrentContext: true # also set current-context to the new cluster's context; same as `--kubeconfig-switch-context` (default: true)\n  runtime: # runtime (docker) specific options\n    gpuRequest: all # same as 
`--gpus all`\n    labels:\n      - label: bar=baz # same as `--runtime-label 'bar=baz@agent:1'` -> this results in a runtime (docker) container label\n        nodeFilters:\n          - agent:1\n    ulimits:\n      - name: nofile\n        soft: 26677\n        hard: 26677\n
"},{"location":"usage/configfile/#tips","title":"Tips","text":""},{"location":"usage/configfile/#config-file-vs-cli-flags","title":"Config File vs. CLI Flags","text":"

k3d uses Cobra and Viper for CLI and general config handling respectively. This automatically introduces a \u201cconfig option order of priority\u201d (precedence order):

Config Precedence Order

Source: spf13/viper#why-viper

Internal Setting > CLI Flag > Environment Variable > Config File > (k/v store >) Defaults

This means that you can define e.g. a \u201cbase configuration file\u201d with settings that you share across different clusters and override only the fields that differ via CLI flags/arguments. For example, you could use the same config file to create three clusters that differ only in their names and kubeAPI (--api-port) settings.
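A sketch of that pattern (the file name and port numbers are hypothetical):

```shell
# one shared base config; per-cluster differences come from CLI flags,
# which take precedence over config-file values
k3d cluster create cluster-a --config base.yaml --api-port 6551
k3d cluster create cluster-b --config base.yaml --api-port 6552
```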

"},{"location":"usage/configfile/#references","title":"References","text":""},{"location":"usage/exposing_services/","title":"Exposing Services","text":""},{"location":"usage/exposing_services/#1-via-ingress-recommended","title":"1. via Ingress (recommended)","text":"

In this example, we will deploy a simple nginx webserver deployment and make it accessible via ingress. To do so, we have to create the cluster such that the internal port 80 (where the traefik ingress controller is listening) is exposed on the host system.

  1. Create a cluster, mapping the ingress port 80 to localhost:8081

    k3d cluster create --api-port 6550 -p \"8081:80@loadbalancer\" --agents 2

    Good to know

    • --api-port 6550 is not required for the example to work. It\u2019s used to have k3s\u2019s API-Server listening on port 6550 with that port mapped to the host system.
    • the port-mapping construct 8081:80@loadbalancer means: \u201cmap port 8081 from the host to port 80 on the container which matches the nodefilter loadbalancer\u201d
      • the loadbalancer nodefilter matches only the serverlb that\u2019s deployed in front of a cluster\u2019s server nodes
        • all ports exposed on the serverlb will be proxied to the same ports on all server nodes in the cluster
  2. Get the kubeconfig file (redundant, as k3d cluster create already merges it into your default kubeconfig file)

    export KUBECONFIG=\"$(k3d kubeconfig write k3s-default)\"

  3. Create a nginx deployment

    kubectl create deployment nginx --image=nginx

  4. Create a ClusterIP service for it

    kubectl create service clusterip nginx --tcp=80:80

  5. Create an ingress object for it by copying the following manifest to a file and applying with kubectl apply -f thatfile.yaml

    Note: k3s deploys traefik as the default ingress controller

    # apiVersion: networking.k8s.io/v1beta1 # for k3s < v1.19\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: nginx\n  annotations:\n    ingress.kubernetes.io/ssl-redirect: \"false\"\nspec:\n  rules:\n  - http:\n      paths:\n      - path: /\n        pathType: Prefix\n        backend:\n          service:\n            name: nginx\n            port:\n              number: 80\n
  6. Curl it via localhost

    curl localhost:8081/

"},{"location":"usage/exposing_services/#2-via-nodeport","title":"2. via NodePort","text":"
  1. Create a cluster, mapping the port 30080 from agent-0 to localhost:8082

    k3d cluster create mycluster -p \"8082:30080@agent:0\" --agents 2

    • Note 1: Kubernetes\u2019 default NodePort range is 30000-32767
    • Note 2: You may as well expose the whole NodePort range from the very beginning, e.g. via k3d cluster create mycluster --agents 3 -p \"30000-32767:30000-32767@server:0\" (See this video from @portainer)

      • Warning: Docker creates iptable entries and a new proxy process per port-mapping, so this may take a very long time or even freeze your system!

      \u2026 (Steps 2 and 3 like above) \u2026

  2. Create a NodePort service for it by copying the following manifest to a file and applying it with kubectl apply -f

    apiVersion: v1\nkind: Service\nmetadata:\n  labels:\n    app: nginx\n  name: nginx\nspec:\n  ports:\n  - name: 80-80\n    nodePort: 30080\n    port: 80\n    protocol: TCP\n    targetPort: 80\n  selector:\n    app: nginx\n  type: NodePort\n
  3. Curl it via localhost

    curl localhost:8082/

"},{"location":"usage/importing_images/","title":"Importing modes","text":""},{"location":"usage/importing_images/#auto","title":"Auto","text":"

Auto-determine whether to use direct or tools-node.

For remote container runtimes, tools-node is faster thanks to less network overhead, so it is selected automatically in that case.

Otherwise direct is used.

"},{"location":"usage/importing_images/#direct","title":"Direct","text":"

Directly load the given images to the k3s nodes. No separate container is spawned, no intermediate files are written.

"},{"location":"usage/importing_images/#tools-node","title":"Tools Node","text":"

Start a k3d-tools container in the container runtime, copy images to that runtime, then load the images to k3s nodes from there.
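If the auto-detection doesn't pick the mode you want, you can set it explicitly; a sketch, assuming the -m/--mode flag of k3d v5 and a cluster named mycluster:

```shell
# Pick one of: auto, direct, tools-node
MODE="direct"

if command -v k3d >/dev/null 2>&1; then
  # import a local image into all nodes of the cluster "mycluster"
  k3d image import nginx:latest --cluster mycluster --mode "$MODE"
fi
```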

"},{"location":"usage/k3s/","title":"K3s Features in k3d","text":"

K3s ships with lots of built-in features and services, some of which may only be usable in "non-standard" ways in k3d, since K3s itself runs inside containers.

"},{"location":"usage/k3s/#general-k3s-documentation","title":"General: K3s documentation","text":""},{"location":"usage/k3s/#coredns","title":"CoreDNS","text":"

Cluster DNS service

"},{"location":"usage/k3s/#resources","title":"Resources","text":""},{"location":"usage/k3s/#coredns-in-k3d","title":"CoreDNS in k3d","text":"

Basically, CoreDNS works the same in k3d as it does in other clusters. One thing to note, though, is that the default forward . /etc/resolv.conf configured in the Corefile doesn't work the same way, as the /etc/resolv.conf file inside the K3s node containers is not the same as the one on your local machine.

"},{"location":"usage/k3s/#modifications","title":"Modifications","text":"

As of k3d v5.x, k3d injects entries into the NodeHosts (basically a hosts file similar to /etc/hosts on Linux, managed by K3s) to enable Pods in the cluster to resolve the names of other containers in the same docker network (cluster network). It also injects a special entry called host.k3d.internal, which resolves to the IP of the network gateway and can be used, e.g., to resolve DNS queries using your local resolver. There's a PR in progress to make customizations easier (for k3d and for users): https://github.com/k3s-io/k3s/pull/4397
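You can inspect the injected entries from inside the cluster; a hedged sketch, assuming a running k3d cluster, kubectl pointing at it, and that K3s keeps NodeHosts in the coredns ConfigMap:

```shell
TARGET="host.k3d.internal"

if command -v kubectl >/dev/null 2>&1; then
  # resolve the injected entry from a throwaway pod
  kubectl run dnstest --rm -i --restart=Never --image=busybox:1.36 -- \
    nslookup "$TARGET"
  # the injected NodeHosts entries live in the kube-system/coredns ConfigMap
  kubectl -n kube-system get configmap coredns -o jsonpath='{.data.NodeHosts}'
fi
```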

"},{"location":"usage/k3s/#local-path-provisioner","title":"local-path-provisioner","text":"

Dynamically provisioning persistent local storage with Kubernetes

"},{"location":"usage/k3s/#resources_1","title":"Resources","text":""},{"location":"usage/k3s/#local-path-provisioner-in-k3d","title":"local-path-provisioner in k3d","text":"

In k3d, the local path that the local-path-provisioner uses (default: /var/lib/rancher/k3s/storage) lies inside the containers' filesystem, meaning that by default it's not mapped anywhere on your host (e.g. into your user home directory) for you to use. You'd need to map some local directory to that path to access the files easily: add --volume $HOME/some/directory:/var/lib/rancher/k3s/storage@all to your k3d cluster create command.
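A sketch of that mapping plus a PersistentVolumeClaim to exercise it (the cluster name and host directory are examples):

```shell
mkdir -p "$HOME/k3d-storage"

if command -v k3d >/dev/null 2>&1; then
  # map the host directory into the provisioner's storage path on all nodes
  k3d cluster create storagetest \
    --volume "$HOME/k3d-storage:/var/lib/rancher/k3s/storage@all"
fi

# claim storage from the local-path storage class
cat > pvc.yaml <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-path
  resources:
    requests:
      storage: 1Gi
EOF
# kubectl apply -f pvc.yaml
# -> data written by pods bound to this claim ends up under $HOME/k3d-storage
```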

"},{"location":"usage/k3s/#traefik","title":"Traefik","text":"

Kubernetes Ingress Controller

"},{"location":"usage/k3s/#resources_2","title":"Resources","text":""},{"location":"usage/k3s/#traefik-in-k3d","title":"Traefik in k3d","text":"

k3d runs K3s in containers, so you'll need to expose the http/https ports on your host to easily access Ingress resources in your cluster. We have a guide explaining how to do this: see Exposing Services.

"},{"location":"usage/k3s/#servicelb-klipper-lb","title":"servicelb (klipper-lb)","text":"

Embedded service load balancer (klipper-lb). Allows you to use Services with type: LoadBalancer in K3s by creating tiny proxies that use hostPorts.

"},{"location":"usage/k3s/#resources_3","title":"Resources","text":""},{"location":"usage/k3s/#servicelb-in-k3d","title":"servicelb in k3d","text":"

klipper-lb creates new pods that proxy traffic from hostPorts to the service ports of type: LoadBalancer. The hostPort in this case is a port in a K3s container, not your local host, so you\u2019d need to add the port-mapping via the --port flag when creating the cluster.
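For example (a hedged sketch; the cluster name is hypothetical): map host port 8080 to port 80 on the loadbalancer, and a type: LoadBalancer service listening on port 80 becomes reachable at localhost:8080:

```shell
PORT_MAPPING="8080:80@loadbalancer"

if command -v k3d >/dev/null 2>&1; then
  k3d cluster create lbtest -p "$PORT_MAPPING"
  # kubectl create deployment nginx --image=nginx
  # kubectl expose deployment nginx --type=LoadBalancer --port=80
  # curl localhost:8080
fi
```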

"},{"location":"usage/kubeconfig/","title":"Handling Kubeconfigs","text":"

By default, k3d will update your default kubeconfig with your new cluster's details and set the current-context to it (this can be disabled). To get a kubeconfig set up for connecting to a k3d cluster without this automatism, there are several options.

What is the default kubeconfig?

We determine the path of the used or default kubeconfig in two ways:

  1. Using the KUBECONFIG environment variable, if it specifies exactly one file
  2. Using the default path (e.g. on Linux it\u2019s $HOME/.kube/config)
"},{"location":"usage/kubeconfig/#getting-the-kubeconfig-for-a-newly-created-cluster","title":"Getting the kubeconfig for a newly created cluster","text":"
  1. Create a new kubeconfig file after cluster creation

    • k3d kubeconfig write mycluster
      • Note: this will create (or update) the file $HOME/.k3d/kubeconfig-mycluster.yaml
      • Tip: Use it: export KUBECONFIG=$(k3d kubeconfig write mycluster)
      • Note 2: alternatively you can use k3d kubeconfig get mycluster > some-file.yaml
  2. Update your default kubeconfig upon cluster creation (DEFAULT)

    • k3d cluster create mycluster --kubeconfig-update-default
      • Note: this won\u2019t switch the current-context (append --kubeconfig-switch-context to do so)
  3. Update your default kubeconfig after cluster creation

    • k3d kubeconfig merge mycluster --kubeconfig-merge-default
      • Note: this won\u2019t switch the current-context (append --kubeconfig-switch-context to do so)
  4. Update a different kubeconfig after cluster creation

    • k3d kubeconfig merge mycluster --output some/other/file.yaml
      • Note: this won\u2019t switch the current-context
    • The file will be created if it doesn\u2019t exist

Switching the current context

None of the above options switch the current-context by default. This is intended to be least intrusive, since the current-context has a global effect. You can switch the current-context directly with the kubeconfig merge command by adding the --kubeconfig-switch-context flag.

"},{"location":"usage/kubeconfig/#removing-cluster-details-from-the-kubeconfig","title":"Removing cluster details from the kubeconfig","text":"

k3d cluster delete mycluster will always remove the details for mycluster from the default kubeconfig. It will also delete the respective kubeconfig file in $HOME/.k3d/ if it exists.

"},{"location":"usage/kubeconfig/#handling-multiple-clusters","title":"Handling multiple clusters","text":"

k3d kubeconfig merge lets you specify one or more clusters via arguments, or all of them via --all. All kubeconfigs will then be merged into a single file if --kubeconfig-merge-default or --output is specified. If neither of those two flags is specified, a new file will be created per cluster and the merged path (e.g. $HOME/.k3d/kubeconfig-cluster1.yaml:$HOME/.k3d/kubeconfig-cluster2.yaml) will be returned. Note that with multiple clusters specified, the --kubeconfig-switch-context flag will change the current-context to the cluster that was last in the list.

"},{"location":"usage/multiserver/","title":"Creating multi-server clusters","text":"

Important note

For the best results (and less unexpected issues), choose 1, 3, 5, \u2026 server nodes. (Read more on etcd quorum on etcd.io) At least 2 cores and 4GiB of RAM are recommended.

"},{"location":"usage/multiserver/#embedded-etcd","title":"Embedded etcd","text":"

Create a cluster with 3 server nodes using k3s\u2019 embedded etcd database. The first server to be created will use the --cluster-init flag and k3d will wait for it to be up and running before creating (and connecting) the other server nodes.

k3d cluster create multiserver --servers 3\n
"},{"location":"usage/multiserver/#adding-server-nodes-to-a-running-cluster","title":"Adding server nodes to a running cluster","text":"

In theory (and also in practice in most cases), this is as easy as executing the following command:

k3d node create newserver --cluster multiserver --role server\n

There\u2019s a trap!

If your cluster was initially created with only a single server node, then this will fail. That\u2019s because the initial server node was not started with the --cluster-init flag and thus is not using the etcd backend.
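One way around this (a sketch, not an officially documented guarantee) is to pass --cluster-init to the first server yourself, so even a single-server cluster starts with the etcd backend and can grow later:

```shell
INIT_ARG='--cluster-init@server:0'

if command -v k3d >/dev/null 2>&1; then
  k3d cluster create singleserver --k3s-arg "$INIT_ARG"
  # later, adding servers works because etcd is already the backend:
  # k3d node create extraserver --cluster singleserver --role server
fi
```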

"},{"location":"usage/registries/","title":"Using Image Registries","text":""},{"location":"usage/registries/#registries-configuration-file","title":"Registries configuration file","text":"

You can add registries by specifying them in a registries.yaml and referencing it at creation time: k3d cluster create mycluster --registry-config \"/home/YOU/my-registries.yaml\".

This file is a regular k3s registries configuration file, and looks like this:

mirrors:\n  \"my.company.registry:5000\":\n    endpoint:\n      - http://my.company.registry:5000\n

In this example, an image with a name like my.company.registry:5000/nginx:latest would be pulled from the registry running at http://my.company.registry:5000.

This file can also be used for providing additional information necessary for accessing some registries, like authentication and certificates.

"},{"location":"usage/registries/#registries-configuration-file-embedded-in-k3ds-simpleconfig","title":"Registries Configuration File embedded in k3d\u2019s SimpleConfig","text":"

If you\u2019re using a SimpleConfig file to configure your k3d cluster, you may as well embed the registries.yaml in there directly:

apiVersion: k3d.io/v1alpha5\nkind: Simple\nmetadata:\n  name: test\nservers: 1\nagents: 2\nregistries:\n  create: \n    name: myregistry\n  config: |\n    mirrors:\n      \"my.company.registry\":\n        endpoint:\n          - http://my.company.registry:5000\n

Here, the config for the k3d-managed registry, created by the create: {...} option, will be merged with the config specified under config: |.

"},{"location":"usage/registries/#authenticated-registries","title":"Authenticated registries","text":"

When using authenticated registries, we can add the username and password in a configs section in the registries.yaml, like this:

mirrors:\n  my.company.registry:\n    endpoint:\n      - http://my.company.registry\n\nconfigs:\n  my.company.registry:\n    auth:\n      username: aladin\n      password: abracadabra\n
"},{"location":"usage/registries/#secure-registries","title":"Secure registries","text":"

When using secure registries, the registries.yaml file must include information about the certificates. For example, if you want to use images from the secure registry running at https://my.company.registry, you must first download a CA file valid for that server and store it in some well-known directory like ${HOME}/.k3d/my-company-root.pem.

Then you have to mount the CA file in some directory in the nodes in the cluster and include that mounted file in a configs section in the registries.yaml file. For example, if we mount the CA file in /etc/ssl/certs/my-company-root.pem, the registries.yaml will look like:

mirrors:\n  my.company.registry:\n    endpoint:\n      - https://my.company.registry\n\nconfigs:\n  my.company.registry:\n    tls:\n      # we will mount \"my-company-root.pem\" in the /etc/ssl/certs/ directory.\n      ca_file: \"/etc/ssl/certs/my-company-root.pem\"\n

Finally, we can create the cluster, mounting the CA file in the path we specified in ca_file:

k3d cluster create \\\n  --volume \"${HOME}/.k3d/my-registries.yaml:/etc/rancher/k3s/registries.yaml\" \\\n  --volume \"${HOME}/.k3d/my-company-root.pem:/etc/ssl/certs/my-company-root.pem\"\n
"},{"location":"usage/registries/#using-a-local-registry","title":"Using a local registry","text":""},{"location":"usage/registries/#preface-referencing-local-registries","title":"Preface: Referencing local registries","text":"

In the next sections, you\u2019re going to create a local registry (i.e. a container image registry running in a container in your docker host). That container will have a name, e.g. mycluster-registry. If you follow the guide closely (or definitely if you use the k3d-managed option), this name will be known to all the hosts (K3s containers) and workloads in your k3d cluster. However, you usually want to push images into that registry from your local machine, which does not know that name by default. Now you have a few options, including the following three:

  1. Use localhost: Since the container will have a port mapped to your local host, you can just directly reference it via e.g. localhost:12345, where 12345 is the mapped port
    • If you later pull the image from the registry, only the repository path (e.g. myrepo/myimage:mytag in mycluster-registry:5000/myrepo/myimage:mytag) matters to find your image in the targeted registry.
  2. Get your machine to know the container name: For this you can use the plain old hosts file (/etc/hosts on Unix systems and C:\\windows\\system32\\drivers\\etc\\hosts on Windows) by adding an entry like the following to the end of the file:

    127.0.0.1 mycluster-registry\n
  3. Use some special resolving magic: Tools like dnsmasq or nss-myhostname (see info box below) can set up your local resolver to resolve the registry name directly to 127.0.0.1.

nss-myhostname to resolve *.localhost

Luckily (for Linux users), NSS-myhostname ships with many Linux distributions and should resolve *.localhost automatically to 127.0.0.1. Otherwise, it\u2019s installable using sudo apt install libnss-myhostname.

"},{"location":"usage/registries/#using-k3d-managed-registries","title":"Using k3d-managed registries","text":""},{"location":"usage/registries/#create-a-dedicated-registry-together-with-your-cluster","title":"Create a dedicated registry together with your cluster","text":"
  1. k3d cluster create mycluster --registry-create mycluster-registry: This creates your cluster mycluster together with a registry container called mycluster-registry

    • k3d sets everything up in the cluster for containerd to be able to pull images from that registry (using the registries.yaml file)
    • the port, which the registry is listening on will be mapped to a random port on your host system
  2. Check the k3d command output or docker ps -f name=mycluster-registry to find the exposed port

  3. Test your registry
"},{"location":"usage/registries/#create-a-customized-k3d-managed-registry","title":"Create a customized k3d-managed registry","text":"
  1. k3d registry create myregistry.localhost --port 12345 creates a new registry called k3d-myregistry.localhost (could be used with automatic resolution of *.localhost, see next section - also, note the k3d- prefix that k3d adds to all resources it creates)
  2. k3d cluster create newcluster --registry-use k3d-myregistry.localhost:12345 (make sure you use the k3d- prefix here) creates a new cluster set up to use that registry
  3. Test your registry
"},{"location":"usage/registries/#using-your-own-not-k3d-managed-local-registry","title":"Using your own (not k3d-managed) local registry","text":"

We recommend using a k3d-managed registry, as it plays nicely with k3d clusters. But here's also a guide to creating your own (not k3d-managed) registry, in case you need features or customizations that k3d does not provide:

Using your own (not k3d-managed) local registry

You can start your own local registry with a few docker commands, like:

docker volume create local_registry\ndocker container run -d --name registry.localhost -v local_registry:/var/lib/registry --restart always -p 12345:5000 registry:2\n

These commands will start your registry container with the name registry.localhost, reachable on host port 12345. In order to push to this registry, you will need to make it accessible as described in the next section. Once your registry is up and running, you need to add it to your registries.yaml configuration file. Finally, you have to connect the registry's network to the k3d cluster network: docker network connect k3d-k3s-default registry.localhost. Then you can test your local registry.

"},{"location":"usage/registries/#pushing-to-your-local-registry-address","title":"Pushing to your local registry address","text":"

See Preface

The information below has been addressed in the preface for this section.

"},{"location":"usage/registries/#testing-your-registry","title":"Testing your registry","text":"

You should test that you can (a) push images to your registry and (b) pull and run those images in your cluster.

We will verify these two things for a local registry (located at k3d-registry.localhost:12345) running on your development machine. Things would be basically the same when checking an external registry, but some additional configuration might be necessary on your local machine when using an authenticated or secure registry (please refer to Docker's documentation for this).

Assumptions: In the following test cases, we assume that the registry name k3d-registry.localhost resolves to 127.0.0.1 in your local machine (see section preface for more details) and to the registry container IP for the k3d cluster nodes (K3s containers).

Note: as per the explanation in the preface, you could replace k3d-registry.localhost:12345 with localhost:12345 in the docker tag and docker push commands below (but not in the kubectl part!)

"},{"location":"usage/registries/#nginx-deployment","title":"Nginx Deployment","text":"

First, we can download some image (like nginx) and push it to our local registry with:

docker pull nginx:latest\ndocker tag nginx:latest k3d-registry.localhost:12345/nginx:latest\ndocker push k3d-registry.localhost:12345/nginx:latest\n

Then we can deploy a pod referencing this image to your cluster:

cat <<EOF | kubectl apply -f -\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: nginx-test-registry\n  labels:\n    app: nginx-test-registry\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: nginx-test-registry\n  template:\n    metadata:\n      labels:\n        app: nginx-test-registry\n    spec:\n      containers:\n      - name: nginx-test-registry\n        image: k3d-registry.localhost:12345/nginx:latest\n        ports:\n        - containerPort: 80\nEOF\n

Then you should check that the pod is running with kubectl get pods -l \"app=nginx-test-registry\".

"},{"location":"usage/registries/#alpine-pod","title":"Alpine Pod","text":"
  1. Pull the alpine image: docker pull alpine:latest
  2. re-tag it to reference your newly created registry: docker tag alpine:latest k3d-registry.localhost:12345/testimage:local
  3. push it: docker push k3d-registry.localhost:12345/testimage:local
  4. Use kubectl to create a new pod in your cluster using that image to see if the cluster can pull from the new registry: kubectl run --image k3d-registry.localhost:12345/testimage:local testimage --command -- tail -f /dev/null
    • (creates a container that will not do anything but keep on running)
"},{"location":"usage/registries/#creating-a-registry-proxy-pull-through-registry","title":"Creating a registry proxy / pull-through registry","text":"
  1. Create a pull-through registry

    k3d registry create docker-io `# Create a registry named k3d-docker-io` \\\n  -p 5000 `# listening on local host port 5000` \\ \n  --proxy-remote-url https://registry-1.docker.io `# let it mirror the Docker Hub registry` \\\n  -v ~/.local/share/docker-io-registry:/var/lib/registry `# also persist the downloaded images on the device outside the container`\n
  2. Create registry.yml

    mirrors:\n  \"docker.io\":\n    endpoint:\n      - http://k3d-docker-io:5000\n
  3. Create a cluster using the pull-through cache

    k3d cluster create cluster01 --registry-use k3d-docker-io:5000 --registry-config registry.yml\n
  4. Once cluster01 is ready, create another cluster with the same registry (or rebuild the cluster); it will use the already locally cached images.

    k3d cluster create cluster02 --registry-use k3d-docker-io:5000 --registry-config registry.yml\n
"},{"location":"usage/registries/#creating-a-registry-proxy-pull-through-registry-via-configfile","title":"Creating a registry proxy / pull-through registry via configfile","text":"
  1. Create a config file, e.g. /home/me/test-regcache.yaml

    apiVersion: k3d.io/v1alpha5\nkind: Simple\nmetadata:\n  name: test-regcache\nregistries:\n  create:\n    name: docker-io # name of the registry container\n    proxy:\n      remoteURL: https://registry-1.docker.io # proxy DockerHub\n    volumes:\n      - /tmp/reg:/var/lib/registry # persist data locally in /tmp/reg\n  config: | # tell K3s to use this registry when pulling from DockerHub\n    mirrors:\n      \"docker.io\":\n        endpoint:\n          - http://docker-io:5000\n
  2. Create cluster from config:

    k3d cluster create -c /home/me/test-regcache.yaml\n
"},{"location":"usage/advanced/calico/","title":"Use Calico instead of Flannel","text":"

Network Policies

k3s comes with a controller that enforces network policies by default. You do not need to switch to Calico for network policies to be enforced. See https://github.com/k3s-io/k3s/issues/1308 for more information. The docs below assume you want to switch to Calico\u2019s policy engine, thus setting --disable-network-policy.

"},{"location":"usage/advanced/calico/#1-download-and-modify-the-calico-descriptor","title":"1. Download and modify the Calico descriptor","text":"

You can follow the Calico documentation to download and modify the Calico descriptor.

Then you have to change the ConfigMap calico-config: in the cni_network_config, add the entry for allowing IP forwarding

\"container_settings\": {\n    \"allow_ip_forwarding\": true\n}\n

Or you can directly use this calico.yaml manifest

"},{"location":"usage/advanced/calico/#2-create-the-cluster-without-flannel-and-with-calico","title":"2. Create the cluster without flannel and with calico","text":"

On k3s cluster creation, flannel and the default network policy controller are disabled. So the cluster creation command (run from the root of the k3d repository) is:

k3d cluster create \"${clustername}\" \\\n  --k3s-arg '--flannel-backend=none@server:*' \\\n  --k3s-arg '--disable-network-policy' \\\n  --volume \"$(pwd)/docs/usage/guides/calico.yaml:/var/lib/rancher/k3s/server/manifests/calico.yaml\"\n

In this example, the calico.yaml manifest is mounted into K3s' auto-deploy manifests directory (/var/lib/rancher/k3s/server/manifests), so Calico is installed automatically on startup. You can add other options as needed.

The cluster will start without flannel and with Calico as CNI Plugin.

To watch the pod deployment:

watch \"kubectl get pods -n kube-system\"    \n

At the beginning, you will see something like this (via kubectl get pods -n kube-system):

NAME                                       READY   STATUS     RESTARTS   AGE\nhelm-install-traefik-pn84f                 0/1     Pending    0          3s\ncalico-node-97rx8                          0/1     Init:0/3   0          3s\nmetrics-server-7566d596c8-hwnqq            0/1     Pending    0          2s\ncalico-kube-controllers-58b656d69f-2z7cn   0/1     Pending    0          2s\nlocal-path-provisioner-6d59f47c7-rmswg     0/1     Pending    0          2s\ncoredns-8655855d6-cxtnr                    0/1     Pending    0          2s\n

And when startup has finished:

NAME                                       READY   STATUS      RESTARTS   AGE\nmetrics-server-7566d596c8-hwnqq            1/1     Running     0          56s\ncalico-node-97rx8                          1/1     Running     0          57s\nhelm-install-traefik-pn84f                 0/1     Completed   1          57s\nsvclb-traefik-lmjr5                        2/2     Running     0          28s\ncalico-kube-controllers-58b656d69f-2z7cn   1/1     Running     0          56s\nlocal-path-provisioner-6d59f47c7-rmswg     1/1     Running     0          56s\ntraefik-758cd5fc85-x8p57                   1/1     Running     0          28s\ncoredns-8655855d6-cxtnr                    1/1     Running     0          56s\n


"},{"location":"usage/advanced/calico/#references","title":"References","text":""},{"location":"usage/advanced/cuda/","title":"Running CUDA workloads","text":"

If you want to run CUDA workloads on the K3s container you need to customize the container. CUDA workloads require the NVIDIA Container Runtime, so containerd needs to be configured to use this runtime. The K3s container itself also needs to run with this runtime. If you are using Docker you can install the NVIDIA Container Toolkit.

"},{"location":"usage/advanced/cuda/#building-a-customized-k3s-image","title":"Building a customized K3s image","text":"

To get the NVIDIA container runtime into the K3s image, you need to build your own K3s image. The native K3s image is based on Alpine, but the NVIDIA container runtime is not supported on Alpine yet. To get around this, we need to build the image with a supported base image.

"},{"location":"usage/advanced/cuda/#dockerfile","title":"Dockerfile","text":"

Dockerfile:

ARG K3S_TAG=\"v1.21.2-k3s1\"\nFROM rancher/k3s:$K3S_TAG as k3s\n\nFROM nvidia/cuda:11.2.0-base-ubuntu18.04\n\nARG NVIDIA_CONTAINER_RUNTIME_VERSION\nENV NVIDIA_CONTAINER_RUNTIME_VERSION=$NVIDIA_CONTAINER_RUNTIME_VERSION\n\nRUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections\n\nRUN apt-get update && \\\n    apt-get -y install gnupg2 curl\n\n# Install NVIDIA Container Runtime\nRUN curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | apt-key add -\n\nRUN curl -s -L https://nvidia.github.io/nvidia-container-runtime/ubuntu18.04/nvidia-container-runtime.list | tee /etc/apt/sources.list.d/nvidia-container-runtime.list\n\nRUN apt-get update && \\\n    apt-get -y install nvidia-container-runtime=${NVIDIA_CONTAINER_RUNTIME_VERSION}\n\nCOPY --from=k3s / /\n\nRUN mkdir -p /etc && \\\n    echo 'hosts: files dns' > /etc/nsswitch.conf\n\nRUN chmod 1777 /tmp\n\n# Provide custom containerd configuration to configure the nvidia-container-runtime\nRUN mkdir -p /var/lib/rancher/k3s/agent/etc/containerd/\n\nCOPY config.toml.tmpl /var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl\n\n# Deploy the nvidia driver plugin on startup\nRUN mkdir -p /var/lib/rancher/k3s/server/manifests\n\nCOPY device-plugin-daemonset.yaml /var/lib/rancher/k3s/server/manifests/nvidia-device-plugin-daemonset.yaml\n\nVOLUME /var/lib/kubelet\nVOLUME /var/lib/rancher/k3s\nVOLUME /var/lib/cni\nVOLUME /var/log\n\nENV PATH=\"$PATH:/bin/aux\"\n\nENTRYPOINT [\"/bin/k3s\"]\nCMD [\"agent\"]\n

This Dockerfile is based on the K3s Dockerfile. The following changes are applied:

  1. Change the base image to nvidia/cuda:11.2.0-base-ubuntu18.04 so the NVIDIA Container Runtime can be installed. The version of cuda:xx.x.x must match the one you're planning to use.
  2. Add a custom containerd config.toml template to add the NVIDIA Container Runtime. This replaces the default runc runtime
  3. Add a manifest for the NVIDIA driver plugin for Kubernetes
"},{"location":"usage/advanced/cuda/#configure-containerd","title":"Configure containerd","text":"

We need to configure containerd to use the NVIDIA Container Runtime. We need to customize the config.toml that is used at startup. K3s provides a way to do this using a config.toml.tmpl file. More information can be found on the K3s site.

[plugins.opt]\n  path = \"{{ .NodeConfig.Containerd.Opt }}\"\n\n[plugins.cri]\n  stream_server_address = \"127.0.0.1\"\n  stream_server_port = \"10010\"\n\n{{- if .IsRunningInUserNS }}\n  disable_cgroup = true\n  disable_apparmor = true\n  restrict_oom_score_adj = true\n{{end}}\n\n{{- if .NodeConfig.AgentConfig.PauseImage }}\n  sandbox_image = \"{{ .NodeConfig.AgentConfig.PauseImage }}\"\n{{end}}\n\n{{- if not .NodeConfig.NoFlannel }}\n[plugins.cri.cni]\n  bin_dir = \"{{ .NodeConfig.AgentConfig.CNIBinDir }}\"\n  conf_dir = \"{{ .NodeConfig.AgentConfig.CNIConfDir }}\"\n{{end}}\n\n[plugins.cri.containerd.runtimes.runc]\n  # ---- changed from 'io.containerd.runc.v2' for GPU support\n  runtime_type = \"io.containerd.runtime.v1.linux\"\n\n# ---- added for GPU support\n[plugins.linux]\n  runtime = \"nvidia-container-runtime\"\n\n{{ if .PrivateRegistryConfig }}\n{{ if .PrivateRegistryConfig.Mirrors }}\n[plugins.cri.registry.mirrors]{{end}}\n{{range $k, $v := .PrivateRegistryConfig.Mirrors }}\n[plugins.cri.registry.mirrors.\"{{$k}}\"]\n  endpoint = [{{range $i, $j := $v.Endpoints}}{{if $i}}, {{end}}{{printf \"%q\" .}}{{end}}]\n{{end}}\n\n{{range $k, $v := .PrivateRegistryConfig.Configs }}\n{{ if $v.Auth }}\n[plugins.cri.registry.configs.\"{{$k}}\".auth]\n  {{ if $v.Auth.Username }}username = \"{{ $v.Auth.Username }}\"{{end}}\n  {{ if $v.Auth.Password }}password = \"{{ $v.Auth.Password }}\"{{end}}\n  {{ if $v.Auth.Auth }}auth = \"{{ $v.Auth.Auth }}\"{{end}}\n  {{ if $v.Auth.IdentityToken }}identitytoken = \"{{ $v.Auth.IdentityToken }}\"{{end}}\n{{end}}\n{{ if $v.TLS }}\n[plugins.cri.registry.configs.\"{{$k}}\".tls]\n  {{ if $v.TLS.CAFile }}ca_file = \"{{ $v.TLS.CAFile }}\"{{end}}\n  {{ if $v.TLS.CertFile }}cert_file = \"{{ $v.TLS.CertFile }}\"{{end}}\n  {{ if $v.TLS.KeyFile }}key_file = \"{{ $v.TLS.KeyFile }}\"{{end}}\n{{end}}\n{{end}}\n{{end}}\n
"},{"location":"usage/advanced/cuda/#the-nvidia-device-plugin","title":"The NVIDIA device plugin","text":"

To enable NVIDIA GPU support on Kubernetes you also need to install the NVIDIA device plugin. The device plugin is a daemonset that automatically exposes the GPUs on each node to the cluster:

apiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n  name: nvidia-device-plugin-daemonset\n  namespace: kube-system\nspec:\n  selector:\n    matchLabels:\n      name: nvidia-device-plugin-ds\n  template:\n    metadata:\n      # Mark this pod as a critical add-on; when enabled, the critical add-on scheduler\n      # reserves resources for critical add-on pods so that they can be rescheduled after\n      # a failure.  This annotation works in tandem with the toleration below.\n      annotations:\n        scheduler.alpha.kubernetes.io/critical-pod: \"\"\n      labels:\n        name: nvidia-device-plugin-ds\n    spec:\n      tolerations:\n      # Allow this pod to be rescheduled while the node is in \"critical add-ons only\" mode.\n      # This, along with the annotation above marks this pod as a critical add-on.\n      - key: CriticalAddonsOnly\n        operator: Exists\n      containers:\n      - env:\n        - name: DP_DISABLE_HEALTHCHECKS\n          value: xids\n        image: nvidia/k8s-device-plugin:1.11\n        name: nvidia-device-plugin-ctr\n        securityContext:\n          allowPrivilegeEscalation: true\n          capabilities:\n            drop: [\"ALL\"]\n        volumeMounts:\n          - name: device-plugin\n            mountPath: /var/lib/kubelet/device-plugins\n      volumes:\n        - name: device-plugin\n          hostPath:\n            path: /var/lib/kubelet/device-plugins\n
"},{"location":"usage/advanced/cuda/#build-the-k3s-image","title":"Build the K3s image","text":"

To build the custom image, we need to build K3s, because we need its generated output.

Put the following files in a directory:

The build.sh script is configured using exports and defaults to v1.21.2+k3s1. Please set at least the IMAGE_REGISTRY variable! The script builds the custom K3s image, including the NVIDIA drivers.

build.sh:

#!/bin/bash\n\nset -euxo pipefail\n\nK3S_TAG=${K3S_TAG:=\"v1.21.2-k3s1\"} # replace + with -, if needed\nIMAGE_REGISTRY=${IMAGE_REGISTRY:=\"MY_REGISTRY\"}\nIMAGE_REPOSITORY=${IMAGE_REPOSITORY:=\"rancher/k3s\"}\nIMAGE_TAG=\"$K3S_TAG-cuda\"\nIMAGE=${IMAGE:=\"$IMAGE_REGISTRY/$IMAGE_REPOSITORY:$IMAGE_TAG\"}\n\nNVIDIA_CONTAINER_RUNTIME_VERSION=${NVIDIA_CONTAINER_RUNTIME_VERSION:=\"3.5.0-1\"}\n\necho \"IMAGE=$IMAGE\"\n\n# due to some unknown reason, copying symlinks fails with buildkit enabled\nDOCKER_BUILDKIT=0 docker build \\\n  --build-arg K3S_TAG=$K3S_TAG \\\n  --build-arg NVIDIA_CONTAINER_RUNTIME_VERSION=$NVIDIA_CONTAINER_RUNTIME_VERSION \\\n  -t $IMAGE .\ndocker push $IMAGE\necho \"Done!\"\n
"},{"location":"usage/advanced/cuda/#run-and-test-the-custom-image-with-k3d","title":"Run and test the custom image with k3d","text":"

You can use the image with k3d:

k3d cluster create gputest --image=$IMAGE --gpus=1\n

Deploy a test pod:

kubectl apply -f cuda-vector-add.yaml\nkubectl logs cuda-vector-add\n

This should output something like the following:

$ kubectl logs cuda-vector-add\n\n[Vector addition of 50000 elements]\nCopy input data from the host memory to the CUDA device\nCUDA kernel launch with 196 blocks of 256 threads\nCopy output data from the CUDA device to the host memory\nTest PASSED\nDone\n

If the cuda-vector-add pod is stuck in Pending state, the device-plugin daemonset probably didn\u2019t get deployed correctly from the auto-deploy manifests. In that case, you can apply it manually via kubectl apply -f device-plugin-daemonset.yaml.

"},{"location":"usage/advanced/cuda/#known-issues","title":"Known issues","text":""},{"location":"usage/advanced/cuda/#acknowledgements","title":"Acknowledgements","text":"

Most of the information in this article was obtained from various sources:

"},{"location":"usage/advanced/cuda/#authors","title":"Authors","text":""},{"location":"usage/advanced/podman/","title":"Using Podman instead of Docker","text":"

Podman has a Docker API compatibility layer. k3d uses the Docker API and is compatible with Podman v4 and higher.

Podman support is experimental

k3d is not guaranteed to work with Podman. If you find a bug, do help by filing an issue

Tested with podman version:

Client:       Podman Engine\nVersion:      4.3.1\nAPI Version:  4.3.1\n

"},{"location":"usage/advanced/podman/#using-podman","title":"Using Podman","text":"

Ensure the Podman system socket is available:

sudo systemctl enable --now podman.socket\n# or to start the socket daemonless\n# sudo podman system service --time=0 &\n

Disable the timeout for the podman service. See the podman-system-service(1) man page for more information.

mkdir -p /etc/containers/containers.conf.d\necho 'service_timeout=0' > /etc/containers/containers.conf.d/timeout.conf\n

To point k3d at the right Docker socket, create a symbolic link:

sudo ln -s /run/podman/podman.sock /var/run/docker.sock\n# or install your system podman-docker if available\nsudo k3d cluster create\n

Alternatively, set DOCKER_HOST when running k3d:

export DOCKER_HOST=unix:///run/podman/podman.sock\nexport DOCKER_SOCK=/run/podman/podman.sock\nsudo --preserve-env=DOCKER_HOST --preserve-env=DOCKER_SOCK k3d cluster create\n
"},{"location":"usage/advanced/podman/#using-rootless-podman","title":"Using rootless Podman","text":"

Ensure the Podman user socket is available:

systemctl --user enable --now podman.socket\n# or podman system service --time=0 &\n

Set DOCKER_HOST when running k3d:

XDG_RUNTIME_DIR=${XDG_RUNTIME_DIR:-/run/user/$(id -u)}\nexport DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock\nexport DOCKER_SOCK=$XDG_RUNTIME_DIR/podman/podman.sock\nk3d cluster create\n
"},{"location":"usage/advanced/podman/#using-cgroup-v2","title":"Using cgroup (v2)","text":"

By default, a non-root user can only get the memory controller and the pids controller delegated.

To run properly, we need to enable CPU, CPUSET, and I/O delegation.

Make sure you\u2019re running cgroup v2

If /sys/fs/cgroup/cgroup.controllers is present on your system, you are using v2, otherwise you are using v1.
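The check above can be sketched as a small shell function (a sketch, assuming /sys/fs/cgroup is the standard cgroup mount point; the function name is illustrative):

```shell
# Report which cgroup version a given hierarchy root uses.
# The cgroup.controllers file only exists in the unified (v2) hierarchy.
cgroup_version() {
  root="${1:-/sys/fs/cgroup}"
  if [ -f "$root/cgroup.controllers" ]; then
    echo "v2"
  else
    echo "v1"
  fi
}

cgroup_version   # e.g. prints "v2" on a cgroup-v2 system
```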

mkdir -p /etc/systemd/system/user@.service.d\ncat > /etc/systemd/system/user@.service.d/delegate.conf <<EOF\n[Service]\nDelegate=cpu cpuset io memory pids\nEOF\nsystemctl daemon-reload\n

Reference: https://rootlesscontaine.rs/getting-started/common/cgroup2/#enabling-cpu-cpuset-and-io-delegation

"},{"location":"usage/advanced/podman/#using-remote-podman","title":"Using remote Podman","text":"

Start Podman on the remote host, and then set DOCKER_HOST when running k3d:

export DOCKER_HOST=ssh://username@hostname\nexport DOCKER_SOCK=/run/user/1000/podman/podman.sock\nk3d cluster create\n
"},{"location":"usage/advanced/podman/#macos","title":"macOS","text":"

Initialize a podman machine, if not done already:

podman machine init\n

Or start an already existing podman machine:

podman machine start\n

Grab the connection details:

podman system connection ls\nName                         URI                                                         Identity                                      Default\npodman-machine-default       ssh://core@localhost:53685/run/user/501/podman/podman.sock  /Users/myusername/.ssh/podman-machine-default  true\npodman-machine-default-root  ssh://root@localhost:53685/run/podman/podman.sock           /Users/myusername/.ssh/podman-machine-default  false\n

Edit your OpenSSH config file to specify the IdentityFile:

vim ~/.ssh/config\n\nHost localhost\n    IdentityFile /Users/myusername/.ssh/podman-machine-default\n
"},{"location":"usage/advanced/podman/#rootless-mode","title":"Rootless mode","text":"

Delegate the cpuset cgroup controller to the user\u2019s systemd slice, export the docker environment variables referenced above for the non-root connection, and create the cluster:

podman machine ssh bash -e <<EOF\n  printf '[Service]\\nDelegate=cpuset\\n' | sudo tee /etc/systemd/system/user@.service.d/k3d.conf\n  sudo systemctl daemon-reload\n  sudo systemctl restart \"user@\\${UID}\"\nEOF\n\nexport DOCKER_HOST=ssh://core@localhost:53685\nexport DOCKER_SOCKET=/run/user/501/podman/podman.sock\nk3d cluster create --k3s-arg '--kubelet-arg=feature-gates=KubeletInUserNamespace=true@server:*'\n
"},{"location":"usage/advanced/podman/#rootful-mode","title":"Rootful mode","text":"

Export the docker environment variables referenced above for the root connection and create the cluster:

export DOCKER_HOST=ssh://root@localhost:53685\nexport DOCKER_SOCK=/run/podman/podman.sock\nk3d cluster create\n
"},{"location":"usage/advanced/podman/#podman-network","title":"Podman network","text":"

The default podman network has DNS disabled. To allow k3d cluster nodes to communicate via DNS, a new network must be created.

podman network create k3d\npodman network inspect k3d -f '{{ .DNSEnabled }}'\ntrue\n

"},{"location":"usage/advanced/podman/#creating-local-registries","title":"Creating local registries","text":"

Because Podman does not have a default \u201cbridge\u201d network, you have to specify a network using the --default-network flag when creating a local registry:

k3d registry create --default-network podman mycluster-registry\n

To use this registry with a cluster, pass the --registry-use flag:

k3d cluster create --registry-use mycluster-registry mycluster\n

Incompatibility with --registry-create

Because --registry-create assumes the default network to be \u201cbridge\u201d, avoid --registry-create when using Podman. Instead, always create a registry before creating a cluster.

Missing cpuset cgroup controller

If you experience an error regarding a missing cpuset cgroup controller, ensure the user unit xdg-document-portal.service is disabled by running systemctl --user stop xdg-document-portal.service. See this issue

"},{"location":"usage/commands/k3d/","title":"K3d","text":""},{"location":"usage/commands/k3d/#k3d","title":"k3d","text":"

https://k3d.io/ -> Run k3s in Docker!

"},{"location":"usage/commands/k3d/#synopsis","title":"Synopsis","text":"

https://k3d.io/ k3d is a wrapper CLI that helps you to easily create k3s clusters inside docker. Nodes of a k3d cluster are docker containers running a k3s image. All Nodes of a k3d cluster are part of the same docker network.

k3d [flags]\n
"},{"location":"usage/commands/k3d/#options","title":"Options","text":"
  -h, --help         help for k3d\n      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n      --version      Show k3d and default k3s version\n
"},{"location":"usage/commands/k3d/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_cluster/","title":"K3d cluster","text":""},{"location":"usage/commands/k3d_cluster/#k3d-cluster","title":"k3d cluster","text":"

Manage cluster(s)

"},{"location":"usage/commands/k3d_cluster/#synopsis","title":"Synopsis","text":"

Manage cluster(s)

k3d cluster [flags]\n
"},{"location":"usage/commands/k3d_cluster/#options","title":"Options","text":"
  -h, --help   help for cluster\n
"},{"location":"usage/commands/k3d_cluster/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_cluster/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_cluster_create/","title":"K3d cluster create","text":""},{"location":"usage/commands/k3d_cluster_create/#k3d-cluster-create","title":"k3d cluster create","text":"

Create a new cluster

"},{"location":"usage/commands/k3d_cluster_create/#synopsis","title":"Synopsis","text":"

Create a new k3s cluster with containerized nodes (k3s in docker). Every cluster will consist of one or more containers:

k3d cluster create NAME [flags]\n
"},{"location":"usage/commands/k3d_cluster_create/#options","title":"Options","text":"
  -a, --agents int                                                     Specify how many agents you want to create\n      --agents-memory string                                           Memory limit imposed on the agents nodes [From docker]\n      --api-port [HOST:]HOSTPORT                                       Specify the Kubernetes API server port exposed on the LoadBalancer (Format: [HOST:]HOSTPORT)\n                                                                        - Example: `k3d cluster create --servers 3 --api-port 0.0.0.0:6550`\n  -c, --config string                                                  Path of a config file to use\n  -e, --env KEY[=VALUE][@NODEFILTER[;NODEFILTER...]]                   Add environment variables to nodes (Format: KEY[=VALUE][@NODEFILTER[;NODEFILTER...]]\n                                                                        - Example: `k3d cluster create --agents 2 -e \"HTTP_PROXY=my.proxy.com@server:0\" -e \"SOME_KEY=SOME_VAL@server:0\"`\n      --gpus string                                                    GPU devices to add to the cluster node containers ('all' to pass all GPUs) [From docker]\n  -h, --help                                                           help for create\n      --host-alias ip:host[,host,...]                                  Add ip:host[,host,...] 
mappings\n      --host-pid-mode                                                  Enable host pid mode of server(s) and agent(s)\n  -i, --image string                                                   Specify k3s image that you want to use for the nodes\n      --k3s-arg ARG@NODEFILTER[;@NODEFILTER]                           Additional args passed to k3s command (Format: ARG@NODEFILTER[;@NODEFILTER])\n                                                                        - Example: `k3d cluster create --k3s-arg \"--disable=traefik@server:0\"`\n      --k3s-node-label KEY[=VALUE][@NODEFILTER[;NODEFILTER...]]        Add label to k3s node (Format: KEY[=VALUE][@NODEFILTER[;NODEFILTER...]]\n                                                                        - Example: `k3d cluster create --agents 2 --k3s-node-label \"my.label@agent:0,1\" --k3s-node-label \"other.label=somevalue@server:0\"`\n      --kubeconfig-switch-context                                      Directly switch the default kubeconfig's current-context to the new cluster's context (requires --kubeconfig-update-default) (default true)\n      --kubeconfig-update-default                                      Directly update the default kubeconfig with the new cluster's context (default true)\n      --lb-config-override strings                                     Use dotted YAML path syntax to override nginx loadbalancer settings\n      --network string                                                 Join an existing network\n      --no-image-volume                                                Disable the creation of a volume for importing images\n      --no-lb                                                          Disable the creation of a LoadBalancer in front of the server nodes\n      --no-rollback                                                    Disable the automatic rollback actions, if anything goes wrong\n  -p, --port [HOST:][HOSTPORT:]CONTAINERPORT[/PROTOCOL][@NODEFILTER]   Map ports from the 
node containers (via the serverlb) to the host (Format: [HOST:][HOSTPORT:]CONTAINERPORT[/PROTOCOL][@NODEFILTER])\n                                                                        - Example: `k3d cluster create --agents 2 -p 8080:80@agent:0 -p 8081@agent:1`\n      --registry-config string                                         Specify path to an extra registries.yaml file\n      --registry-create NAME[:HOST][:HOSTPORT]                         Create a k3d-managed registry and connect it to the cluster (Format: NAME[:HOST][:HOSTPORT]\n                                                                        - Example: `k3d cluster create --registry-create mycluster-registry:0.0.0.0:5432`\n      --registry-use stringArray                                       Connect to one or more k3d-managed registries running locally\n      --runtime-label KEY[=VALUE][@NODEFILTER[;NODEFILTER...]]         Add label to container runtime (Format: KEY[=VALUE][@NODEFILTER[;NODEFILTER...]]\n                                                                        - Example: `k3d cluster create --agents 2 --runtime-label \"my.label@agent:0,1\" --runtime-label \"other.label=somevalue@server:0\"`\n      --runtime-ulimit NAME[=SOFT]:[HARD]                              Add ulimit to container runtime (Format: NAME[=SOFT]:[HARD]\n                                                                        - Example: `k3d cluster create --agents 2 --runtime-ulimit \"nofile=1024:1024\" --runtime-ulimit \"noproc=1024:1024\"`\n  -s, --servers int                                                    Specify how many servers you want to create\n      --servers-memory string                                          Memory limit imposed on the server nodes [From docker]\n      --subnet 172.28.0.0/16                                           [Experimental: IPAM] Define a subnet for the newly created container network (Example: 172.28.0.0/16)\n      --timeout duration                                        
       Rollback changes if cluster couldn't be created in specified duration.\n      --token string                                                   Specify a cluster token. By default, we generate one.\n  -v, --volume [SOURCE:]DEST[@NODEFILTER[;NODEFILTER...]]              Mount volumes into the nodes (Format: [SOURCE:]DEST[@NODEFILTER[;NODEFILTER...]]\n                                                                        - Example: `k3d cluster create --agents 2 -v /my/path@agent:0,1 -v /tmp/test:/tmp/other@server:0`\n      --wait                                                           Wait for the server(s) to be ready before returning. Use '--timeout DURATION' to not wait forever. (default true)\n
"},{"location":"usage/commands/k3d_cluster_create/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_cluster_create/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_cluster_delete/","title":"K3d cluster delete","text":""},{"location":"usage/commands/k3d_cluster_delete/#k3d-cluster-delete","title":"k3d cluster delete","text":"

Delete cluster(s).

"},{"location":"usage/commands/k3d_cluster_delete/#synopsis","title":"Synopsis","text":"

Delete cluster(s).

k3d cluster delete [NAME [NAME ...] | --all] [flags]\n
"},{"location":"usage/commands/k3d_cluster_delete/#options","title":"Options","text":"
  -a, --all             Delete all existing clusters\n  -c, --config string   Path of a config file to use\n  -h, --help            help for delete\n
"},{"location":"usage/commands/k3d_cluster_delete/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_cluster_delete/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_cluster_edit/","title":"K3d cluster edit","text":""},{"location":"usage/commands/k3d_cluster_edit/#k3d-cluster-edit","title":"k3d cluster edit","text":"

[EXPERIMENTAL] Edit cluster(s).

"},{"location":"usage/commands/k3d_cluster_edit/#synopsis","title":"Synopsis","text":"

[EXPERIMENTAL] Edit cluster(s).

k3d cluster edit CLUSTER [flags]\n
"},{"location":"usage/commands/k3d_cluster_edit/#options","title":"Options","text":"
  -h, --help                                                               help for edit\n      --port-add [HOST:][HOSTPORT:]CONTAINERPORT[/PROTOCOL][@NODEFILTER]   [EXPERIMENTAL] Map ports from the node containers (via the serverlb) to the host (Format: [HOST:][HOSTPORT:]CONTAINERPORT[/PROTOCOL][@NODEFILTER])\n                                                                            - Example: `k3d node edit k3d-mycluster-serverlb --port-add 8080:80`\n
"},{"location":"usage/commands/k3d_cluster_edit/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_cluster_edit/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_cluster_list/","title":"K3d cluster list","text":""},{"location":"usage/commands/k3d_cluster_list/#k3d-cluster-list","title":"k3d cluster list","text":"

List cluster(s)

"},{"location":"usage/commands/k3d_cluster_list/#synopsis","title":"Synopsis","text":"

List cluster(s).

k3d cluster list [NAME [NAME...]] [flags]\n
"},{"location":"usage/commands/k3d_cluster_list/#options","title":"Options","text":"
  -h, --help            help for list\n      --no-headers      Disable headers\n  -o, --output string   Output format. One of: json|yaml\n      --token           Print k3s cluster token\n
"},{"location":"usage/commands/k3d_cluster_list/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_cluster_list/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_cluster_start/","title":"K3d cluster start","text":""},{"location":"usage/commands/k3d_cluster_start/#k3d-cluster-start","title":"k3d cluster start","text":"

Start existing k3d cluster(s)

"},{"location":"usage/commands/k3d_cluster_start/#synopsis","title":"Synopsis","text":"

Start existing k3d cluster(s)

k3d cluster start [NAME [NAME...] | --all] [flags]\n
"},{"location":"usage/commands/k3d_cluster_start/#options","title":"Options","text":"
  -a, --all                Start all existing clusters\n  -h, --help               help for start\n      --timeout duration   Maximum waiting time for '--wait' before canceling/returning.\n      --wait               Wait for the server(s) (and loadbalancer) to be ready before returning. (default true)\n
"},{"location":"usage/commands/k3d_cluster_start/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_cluster_start/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_cluster_stop/","title":"K3d cluster stop","text":""},{"location":"usage/commands/k3d_cluster_stop/#k3d-cluster-stop","title":"k3d cluster stop","text":"

Stop existing k3d cluster(s)

"},{"location":"usage/commands/k3d_cluster_stop/#synopsis","title":"Synopsis","text":"

Stop existing k3d cluster(s).

k3d cluster stop [NAME [NAME...] | --all] [flags]\n
"},{"location":"usage/commands/k3d_cluster_stop/#options","title":"Options","text":"
  -a, --all    Stop all existing clusters\n  -h, --help   help for stop\n
"},{"location":"usage/commands/k3d_cluster_stop/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_cluster_stop/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_completion/","title":"K3d completion","text":""},{"location":"usage/commands/k3d_completion/#k3d-completion","title":"k3d completion","text":"

Generate completion scripts for [bash, zsh, fish, powershell | psh]

"},{"location":"usage/commands/k3d_completion/#synopsis","title":"Synopsis","text":"

To load completions:

Bash:

$ source <(k3d completion bash)\n\n# To load completions for each session, execute once:\n# Linux:\n$ k3d completion bash > /etc/bash_completion.d/k3d\n# macOS:\n$ k3d completion bash > /usr/local/etc/bash_completion.d/k3d\n

Zsh:

# If shell completion is not already enabled in your environment,\n# you will need to enable it.  You can execute the following once:\n\n$ echo \"autoload -U compinit; compinit\" >> ~/.zshrc\n\n# To load completions for each session, execute once:\n$ k3d completion zsh > \"${fpath[1]}/_k3d\"\n\n# You will need to start a new shell for this setup to take effect.\n

fish:

$ k3d completion fish | source\n\n# To load completions for each session, execute once:\n$ k3d completion fish > ~/.config/fish/completions/k3d.fish\n

PowerShell:

PS> k3d completion powershell | Out-String | Invoke-Expression\n\n# To load completions for every new session, run:\nPS> k3d completion powershell > k3d.ps1\n# and source this file from your PowerShell profile.\n
k3d completion SHELL\n
"},{"location":"usage/commands/k3d_completion/#options","title":"Options","text":"
  -h, --help   help for completion\n
"},{"location":"usage/commands/k3d_completion/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_completion/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_config/","title":"K3d config","text":""},{"location":"usage/commands/k3d_config/#k3d-config","title":"k3d config","text":"

Work with config file(s)

"},{"location":"usage/commands/k3d_config/#synopsis","title":"Synopsis","text":"

Work with config file(s)

k3d config [flags]\n
"},{"location":"usage/commands/k3d_config/#options","title":"Options","text":"
  -h, --help   help for config\n
"},{"location":"usage/commands/k3d_config/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_config/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_config_init/","title":"K3d config init","text":""},{"location":"usage/commands/k3d_config_init/#k3d-config-init","title":"k3d config init","text":"
k3d config init [flags]\n
"},{"location":"usage/commands/k3d_config_init/#options","title":"Options","text":"
  -f, --force           Force overwrite of target file\n  -h, --help            help for init\n  -o, --output string   Write a default k3d config (default \"k3d-default.yaml\")\n
"},{"location":"usage/commands/k3d_config_init/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_config_init/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_config_migrate/","title":"K3d config migrate","text":""},{"location":"usage/commands/k3d_config_migrate/#k3d-config-migrate","title":"k3d config migrate","text":"
k3d config migrate INPUT [OUTPUT] [flags]\n
"},{"location":"usage/commands/k3d_config_migrate/#options","title":"Options","text":"
  -h, --help   help for migrate\n
"},{"location":"usage/commands/k3d_config_migrate/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_config_migrate/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_image/","title":"K3d image","text":""},{"location":"usage/commands/k3d_image/#k3d-image","title":"k3d image","text":"

Handle container images.

"},{"location":"usage/commands/k3d_image/#synopsis","title":"Synopsis","text":"

Handle container images.

k3d image [flags]\n
"},{"location":"usage/commands/k3d_image/#options","title":"Options","text":"
  -h, --help   help for image\n
"},{"location":"usage/commands/k3d_image/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_image/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_image_import/","title":"K3d image import","text":""},{"location":"usage/commands/k3d_image_import/#k3d-image-import","title":"k3d image import","text":"

Import image(s) from docker into k3d cluster(s).

"},{"location":"usage/commands/k3d_image_import/#synopsis","title":"Synopsis","text":"

Import image(s) from docker into k3d cluster(s).

If an IMAGE starts with the prefix \u2018docker.io/\u2019, then this prefix is stripped internally. That is, \u2018docker.io/k3d-io/k3d-tools:latest\u2019 is treated as \u2018k3d-io/k3d-tools:latest\u2019.

If an IMAGE starts with the prefix \u2018library/\u2019 (or \u2018docker.io/library/\u2019), then this prefix is stripped internally. That is, \u2018library/busybox:latest\u2019 (or \u2018docker.io/library/busybox:latest\u2019) is treated as \u2018busybox:latest\u2019.

If an IMAGE does not have a version tag, then \u2018:latest\u2019 is assumed. That is, \u2018k3d-io/k3d-tools\u2019 is treated as \u2018k3d-io/k3d-tools:latest\u2019.

A file ARCHIVE always takes precedence. So if a file \u2018./k3d-io/k3d-tools\u2019 exists, k3d will try to import it instead of the IMAGE of the same name.
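The normalization rules above can be sketched as a small shell function (an illustration of the described behavior, not k3d\u2019s actual code; it ignores edge cases such as registry hosts with ports):

```shell
# Normalize an image name the way the rules above describe:
# strip a leading 'docker.io/', then a leading 'library/',
# and append ':latest' if no tag is present.
normalize_image() {
  img="$1"
  img="${img#docker.io/}"
  img="${img#library/}"
  case "$img" in
    *:*) ;;                    # tag already present
    *)   img="$img:latest" ;;  # assume ':latest'
  esac
  echo "$img"
}

normalize_image docker.io/library/busybox:latest   # prints "busybox:latest"
normalize_image k3d-io/k3d-tools                   # prints "k3d-io/k3d-tools:latest"
```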

k3d image import [IMAGE | ARCHIVE [IMAGE | ARCHIVE...]] [flags]\n
"},{"location":"usage/commands/k3d_image_import/#options","title":"Options","text":"
  -c, --cluster stringArray   Select clusters to load the image to. (default [k3s-default])\n  -h, --help                  help for import\n  -k, --keep-tarball          Do not delete the tarball containing the saved images from the shared volume\n  -t, --keep-tools            Do not delete the tools node after import\n  -m, --mode string           Which method to use to import images into the cluster [auto, direct, tools]. See https://k3d.io/stable/usage/importing_images/ (default \"tools-node\")\n
"},{"location":"usage/commands/k3d_image_import/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_image_import/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_kubeconfig/","title":"K3d kubeconfig","text":""},{"location":"usage/commands/k3d_kubeconfig/#k3d-kubeconfig","title":"k3d kubeconfig","text":"

Manage kubeconfig(s)

"},{"location":"usage/commands/k3d_kubeconfig/#synopsis","title":"Synopsis","text":"

Manage kubeconfig(s)

k3d kubeconfig [flags]\n
"},{"location":"usage/commands/k3d_kubeconfig/#options","title":"Options","text":"
  -h, --help   help for kubeconfig\n
"},{"location":"usage/commands/k3d_kubeconfig/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_kubeconfig/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_kubeconfig_get/","title":"K3d kubeconfig get","text":""},{"location":"usage/commands/k3d_kubeconfig_get/#k3d-kubeconfig-get","title":"k3d kubeconfig get","text":"

Print kubeconfig(s) from cluster(s).

"},{"location":"usage/commands/k3d_kubeconfig_get/#synopsis","title":"Synopsis","text":"

Print kubeconfig(s) from cluster(s).

k3d kubeconfig get [CLUSTER [CLUSTER [...]] | --all] [flags]\n
"},{"location":"usage/commands/k3d_kubeconfig_get/#options","title":"Options","text":"
  -a, --all    Output kubeconfigs from all existing clusters\n  -h, --help   help for get\n
"},{"location":"usage/commands/k3d_kubeconfig_get/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_kubeconfig_get/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_kubeconfig_merge/","title":"K3d kubeconfig merge","text":""},{"location":"usage/commands/k3d_kubeconfig_merge/#k3d-kubeconfig-merge","title":"k3d kubeconfig merge","text":"

Write/Merge kubeconfig(s) from cluster(s) into new or existing kubeconfig/file.

"},{"location":"usage/commands/k3d_kubeconfig_merge/#synopsis","title":"Synopsis","text":"

Write/Merge kubeconfig(s) from cluster(s) into new or existing kubeconfig/file.

k3d kubeconfig merge [CLUSTER [CLUSTER [...]] | --all] [flags]\n
"},{"location":"usage/commands/k3d_kubeconfig_merge/#options","title":"Options","text":"
  -a, --all                         Get kubeconfigs from all existing clusters\n  -h, --help                        help for merge\n  -d, --kubeconfig-merge-default    Merge into the default kubeconfig ($KUBECONFIG or $HOME/.kube/config)\n  -s, --kubeconfig-switch-context   Switch to new context (default true)\n  -o, --output string               Define output [ - | FILE ] (default from $KUBECONFIG or $HOME/.kube/config)\n      --overwrite                   [Careful!] Overwrite existing file, ignoring its contents\n  -u, --update                      Update conflicting fields in existing kubeconfig (default true)\n
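Combining the flags above, merging a cluster's credentials into the default kubeconfig and switching the context in one go can be sketched as follows (the cluster name mycluster is illustrative and assumed to exist already):

```shell
# Merge mycluster's kubeconfig into the default kubeconfig
# ($KUBECONFIG or $HOME/.kube/config) and switch to its context
k3d kubeconfig merge mycluster --kubeconfig-merge-default --kubeconfig-switch-context

# Alternatively, write it to a standalone file instead of merging
k3d kubeconfig merge mycluster --output "$HOME/.kube/mycluster.yaml"
```

Both invocations require a running k3d cluster, so they are meant as a usage sketch rather than a copy-paste test.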
"},{"location":"usage/commands/k3d_kubeconfig_merge/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_kubeconfig_merge/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_node/","title":"K3d node","text":""},{"location":"usage/commands/k3d_node/#k3d-node","title":"k3d node","text":"

Manage node(s)

"},{"location":"usage/commands/k3d_node/#synopsis","title":"Synopsis","text":"

Manage node(s)

k3d node [flags]\n
"},{"location":"usage/commands/k3d_node/#options","title":"Options","text":"
  -h, --help   help for node\n
"},{"location":"usage/commands/k3d_node/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_node/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_node_create/","title":"K3d node create","text":""},{"location":"usage/commands/k3d_node_create/#k3d-node-create","title":"k3d node create","text":"

Create a new k3s node in docker

"},{"location":"usage/commands/k3d_node_create/#synopsis","title":"Synopsis","text":"

Create a new containerized k3s node (k3s in docker).

k3d node create NAME [flags]\n
"},{"location":"usage/commands/k3d_node_create/#options","title":"Options","text":"
  -c, --cluster string           Cluster URL or k3d cluster name to connect to. (default \"k3s-default\")\n  -h, --help                     help for create\n  -i, --image string             Specify k3s image used for the node(s) (default: copied from existing node)\n      --k3s-arg stringArray      Additional args passed to the k3s command\n      --k3s-node-label strings   Specify k3s node labels in format \"foo=bar\"\n      --memory string            Memory limit imposed on the node [From docker]\n  -n, --network strings          Add node to (another) runtime network\n      --replicas int             Number of replicas of this node specification. (default 1)\n      --role string              Specify node role [server, agent] (default \"agent\")\n      --runtime-label strings    Specify container runtime labels in format \"foo=bar\"\n      --runtime-ulimit strings   Specify container runtime ulimit in format \"ulimit=soft:hard\"\n      --timeout duration         Maximum waiting time for '--wait' before canceling/returning.\n  -t, --token string             Override cluster token (required when connecting to an external cluster)\n      --wait                     Wait for the node(s) to be ready before returning. (default true)\n
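Using only flags listed above, adding extra worker capacity to an existing cluster might look like this (cluster and node names are illustrative):

```shell
# Add two agent nodes to the existing cluster "mycluster",
# labeling them so workloads can target them via a nodeSelector
k3d node create extra-agent \
  --cluster mycluster \
  --role agent \
  --replicas 2 \
  --k3s-node-label "pool=extra"
```

With `--replicas 2`, k3d creates two containers from this one node specification; this sketch assumes a cluster named mycluster is already running.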
"},{"location":"usage/commands/k3d_node_create/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_node_create/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_node_delete/","title":"K3d node delete","text":""},{"location":"usage/commands/k3d_node_delete/#k3d-node-delete","title":"k3d node delete","text":"

Delete node(s).

"},{"location":"usage/commands/k3d_node_delete/#synopsis","title":"Synopsis","text":"

Delete node(s).

k3d node delete (NAME | --all) [flags]\n
"},{"location":"usage/commands/k3d_node_delete/#options","title":"Options","text":"
  -a, --all          Delete all existing nodes\n  -h, --help         help for delete\n  -r, --registries   Also delete registries\n
"},{"location":"usage/commands/k3d_node_delete/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_node_delete/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_node_edit/","title":"K3d node edit","text":""},{"location":"usage/commands/k3d_node_edit/#k3d-node-edit","title":"k3d node edit","text":"

[EXPERIMENTAL] Edit node(s).

"},{"location":"usage/commands/k3d_node_edit/#synopsis","title":"Synopsis","text":"

[EXPERIMENTAL] Edit node(s).

k3d node edit NODE [flags]\n
"},{"location":"usage/commands/k3d_node_edit/#options","title":"Options","text":"
  -h, --help                                                               help for edit\n      --port-add [HOST:][HOSTPORT:]CONTAINERPORT[/PROTOCOL][@NODEFILTER]   [EXPERIMENTAL] (serverlb only!) Map ports from the node container to the host (Format: [HOST:][HOSTPORT:]CONTAINERPORT[/PROTOCOL][@NODEFILTER])\n                                                                            - Example: `k3d node edit k3d-mycluster-serverlb --port-add 8080:80`\n
"},{"location":"usage/commands/k3d_node_edit/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_node_edit/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_node_list/","title":"K3d node list","text":""},{"location":"usage/commands/k3d_node_list/#k3d-node-list","title":"k3d node list","text":"

List node(s)

"},{"location":"usage/commands/k3d_node_list/#synopsis","title":"Synopsis","text":"

List node(s).

k3d node list [NODE [NODE...]] [flags]\n
"},{"location":"usage/commands/k3d_node_list/#options","title":"Options","text":"
  -h, --help            help for list\n      --no-headers      Disable headers\n  -o, --output string   Output format. One of: json|yaml\n
"},{"location":"usage/commands/k3d_node_list/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_node_list/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_node_start/","title":"K3d node start","text":""},{"location":"usage/commands/k3d_node_start/#k3d-node-start","title":"k3d node start","text":"

Start an existing k3d node

"},{"location":"usage/commands/k3d_node_start/#synopsis","title":"Synopsis","text":"

Start an existing k3d node.

k3d node start NODE [flags]\n
"},{"location":"usage/commands/k3d_node_start/#options","title":"Options","text":"
  -h, --help   help for start\n
"},{"location":"usage/commands/k3d_node_start/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_node_start/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_node_stop/","title":"K3d node stop","text":""},{"location":"usage/commands/k3d_node_stop/#k3d-node-stop","title":"k3d node stop","text":"

Stop an existing k3d node

"},{"location":"usage/commands/k3d_node_stop/#synopsis","title":"Synopsis","text":"

Stop an existing k3d node.

k3d node stop NAME [flags]\n
"},{"location":"usage/commands/k3d_node_stop/#options","title":"Options","text":"
  -h, --help   help for stop\n
"},{"location":"usage/commands/k3d_node_stop/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_node_stop/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_registry/","title":"K3d registry","text":""},{"location":"usage/commands/k3d_registry/#k3d-registry","title":"k3d registry","text":"

Manage registry/registries

"},{"location":"usage/commands/k3d_registry/#synopsis","title":"Synopsis","text":"

Manage registry/registries

k3d registry [flags]\n
"},{"location":"usage/commands/k3d_registry/#options","title":"Options","text":"
  -h, --help   help for registry\n
"},{"location":"usage/commands/k3d_registry/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_registry/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_registry_create/","title":"K3d registry create","text":""},{"location":"usage/commands/k3d_registry_create/#k3d-registry-create","title":"k3d registry create","text":"

Create a new registry

"},{"location":"usage/commands/k3d_registry_create/#synopsis","title":"Synopsis","text":"

Create a new registry.

k3d registry create NAME [flags]\n
"},{"location":"usage/commands/k3d_registry_create/#options","title":"Options","text":"
      --default-network string    Specify the network connected to the registry (default \"bridge\")\n  -h, --help                      help for create\n  -i, --image string              Specify image used for the registry (default \"docker.io/library/registry:2\")\n      --no-help                   Disable the help text (How-To use the registry)\n  -p, --port [HOST:]HOSTPORT      Select which port the registry should be listening on on your machine (localhost) (Format: [HOST:]HOSTPORT)\n                                   - Example: `k3d registry create --port 0.0.0.0:5111` (default \"random\")\n      --proxy-password string     Specify the password of the proxied remote registry\n      --proxy-remote-url string   Specify the url of the proxied remote registry\n      --proxy-username string     Specify the username of the proxied remote registry\n  -v, --volume [SOURCE:]DEST      Mount volumes into the registry node (Format: [SOURCE:]DEST)\n
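A typical workflow is to create a registry on a fixed local port and then attach a new cluster to it (a sketch; names and ports are illustrative, and k3d prefixes registry names with `k3d-`):

```shell
# Create a local registry listening on 127.0.0.1:5111
k3d registry create myregistry.localhost --port 127.0.0.1:5111

# Create a cluster wired up to use that registry
k3d cluster create mycluster --registry-use k3d-myregistry.localhost:5111
```

Images pushed to `localhost:5111` can then be referenced from within the cluster as `k3d-myregistry.localhost:5111/<image>`.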
"},{"location":"usage/commands/k3d_registry_create/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_registry_create/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_registry_delete/","title":"K3d registry delete","text":""},{"location":"usage/commands/k3d_registry_delete/#k3d-registry-delete","title":"k3d registry delete","text":"

Delete registry/registries.

"},{"location":"usage/commands/k3d_registry_delete/#synopsis","title":"Synopsis","text":"

Delete registry/registries.

k3d registry delete (NAME | --all) [flags]\n
"},{"location":"usage/commands/k3d_registry_delete/#options","title":"Options","text":"
  -a, --all    Delete all existing registries\n  -h, --help   help for delete\n
"},{"location":"usage/commands/k3d_registry_delete/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_registry_delete/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_registry_list/","title":"K3d registry list","text":""},{"location":"usage/commands/k3d_registry_list/#k3d-registry-list","title":"k3d registry list","text":"

List registries

"},{"location":"usage/commands/k3d_registry_list/#synopsis","title":"Synopsis","text":"

List registries.

k3d registry list [NAME [NAME...]] [flags]\n
"},{"location":"usage/commands/k3d_registry_list/#options","title":"Options","text":"
  -h, --help            help for list\n      --no-headers      Disable headers\n  -o, --output string   Output format. One of: json|yaml\n
"},{"location":"usage/commands/k3d_registry_list/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_registry_list/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_version/","title":"K3d version","text":""},{"location":"usage/commands/k3d_version/#k3d-version","title":"k3d version","text":"

Show k3d and default k3s version

"},{"location":"usage/commands/k3d_version/#synopsis","title":"Synopsis","text":"

Show k3d and default k3s version

k3d version [flags]\n
"},{"location":"usage/commands/k3d_version/#options","title":"Options","text":"
  -h, --help            help for version\n  -o, --output string   Output version information in a different format (currently only 'json' is supported)\n
"},{"location":"usage/commands/k3d_version/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_version/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_version_list/","title":"K3d version list","text":""},{"location":"usage/commands/k3d_version_list/#k3d-version-list","title":"k3d version list","text":"

List k3d/K3s versions. Component can be one of \u2018k3d\u2019, \u2018k3s\u2019, \u2018k3d-proxy\u2019, \u2018k3d-tools\u2019.

k3d version list COMPONENT [flags]\n
"},{"location":"usage/commands/k3d_version_list/#options","title":"Options","text":"
  -e, --exclude string   Exclude Regexp (default excludes pre-releases and arch-specific tags) (default \".+(rc|engine|alpha|beta|dev|test|arm|arm64|amd64).*\")\n  -f, --format string    [DEPRECATED] Use --output instead (default \"raw\")\n  -h, --help             help for list\n  -i, --include string   Include Regexp (default includes everything) (default \".*\")\n  -l, --limit int        Limit number of tags in output (0 = unlimited)\n  -o, --output string    Output Format [raw | repo] (default \"raw\")\n  -s, --sort string      Sort Mode (asc | desc | off) (default \"desc\")\n
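Combining the filter flags above, querying available K3s tags might look like this (the regex is an illustrative example):

```shell
# List the 5 most recent stable K3s tags (pre-releases excluded by default)
k3d version list k3s --limit 5

# List only v1.24.x tags, newest first
k3d version list k3s --include ".*v1\.24.*" --limit 3
```

Both commands query the remote registry, so they need network access when run.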
"},{"location":"usage/commands/k3d_version_list/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_version_list/#see-also","title":"SEE ALSO","text":""}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Overview","text":""},{"location":"#what-is-k3d","title":"What is k3d?","text":"

k3d is a lightweight wrapper to run k3s (Rancher Lab\u2019s minimal Kubernetes distribution) in docker.

k3d makes it very easy to create single- and multi-node k3s clusters in docker, e.g. for local development on Kubernetes.

Note: k3d is a community-driven project but it\u2019s not an official Rancher (SUSE) product. Sponsoring: To spend any significant amount of time improving k3d, we rely on sponsorships:

- GitHub Sponsors: - LiberaPay: - IssueHunt: https://issuehunt.io/r/k3d-io/k3d

View a quick demo

"},{"location":"#learning","title":"Learning","text":"

k3d demo repository: iwilltry42/k3d-demo

Featured use-cases include:

"},{"location":"#requirements","title":"Requirements","text":""},{"location":"#releases","title":"Releases","text":"Platform Stage Version Release Date Downloads so far GitHub Releases stable GitHub Releases latest Homebrew stable - - Chocolatey stable - - Scoop stable - -"},{"location":"#installation","title":"Installation","text":"

You have several options there:

"},{"location":"#install-script","title":"Install Script","text":""},{"location":"#install-current-latest-release","title":"Install current latest release","text":" "},{"location":"#install-specific-release","title":"Install specific release","text":"

Use the install script to grab a specific release (via TAG environment variable):

"},{"location":"#other-installers","title":"Other Installers","text":"Other Installation Methods "},{"location":"#quick-start","title":"Quick Start","text":"

Create a cluster named mycluster with just a single server node:

k3d cluster create mycluster\n

Use the new cluster with kubectl, e.g.:

kubectl get nodes\n
Getting the cluster\u2019s kubeconfig (included in k3d cluster create)

Get the new cluster\u2019s connection details merged into your default kubeconfig (usually specified using the KUBECONFIG environment variable or the default path $HOME/.kube/config) and directly switch to the new context:

k3d kubeconfig merge mycluster --kubeconfig-switch-context\n
"},{"location":"#connect","title":"Connect","text":"
  1. Join the Rancher community on slack via slack.rancher.io
  2. Go to rancher-users.slack.com and join our channel #k3d
  3. Start chatting
"},{"location":"#related-projects","title":"Related Projects","text":""},{"location":"design/concepts/","title":"Concepts","text":""},{"location":"design/concepts/#nodefilters","title":"Nodefilters","text":""},{"location":"design/concepts/#about","title":"About","text":"

Nodefilters are a concept in k3d to specify which nodes of a newly created cluster a condition or setting should apply to.

"},{"location":"design/concepts/#syntax","title":"Syntax","text":"

The overall syntax is @&lt;group&gt;:&lt;subset&gt;[:&lt;suffix&gt;].
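As a sketch of that syntax in practice (assuming the usual node groups server, agent and loadbalancer, with an index or `*` as the subset):

```shell
# Map host port 8080 to container port 80 on the loadbalancer
k3d cluster create mycluster --port "8080:80@loadbalancer"

# Set an environment variable on the first server node only
k3d cluster create mycluster --env "FOO=bar@server:0"

# Mount a volume into all agent nodes
k3d cluster create mycluster --volume "/my/path@agent:*"
```

Quoting the filter protects the `*` and `@` characters from shell expansion.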

"},{"location":"design/concepts/#example","title":"Example","text":""},{"location":"design/defaults/","title":"Defaults","text":""},{"location":"design/defaults/#k3d-reserved-settings","title":"k3d reserved settings","text":"

When you create a K3s cluster in Docker using k3d, we make use of some K3s configuration options, making them \u201creserved\u201d for k3d. This means that overriding those options with your own values may break the cluster setup.

"},{"location":"design/defaults/#environment-variables","title":"Environment Variables","text":"

The following K3s environment variables are used to configure the cluster:

Variable K3d Default Configurable? K3S_URL https://$CLUSTERNAME-server-0:6443 no K3S_TOKEN random yes (--token) K3S_KUBECONFIG_OUTPUT /output/kubeconfig.yaml no"},{"location":"design/defaults/#k3d-loadbalancer","title":"k3d Loadbalancer","text":"

By default, k3d creates an Nginx loadbalancer alongside the clusters it creates to handle the port-forwarding. The loadbalancer can partly be configured using k3d-defined settings.
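Assuming a recent k3d v5 release, those loadbalancer settings can be overridden at cluster creation time via the `--lb-config-override` flag (a sketch; values are illustrative):

```shell
# Raise the loadbalancer's worker_connections and default proxy timeout
k3d cluster create mycluster \
  --lb-config-override settings.workerConnections=2048 \
  --lb-config-override settings.defaultProxyTimeout=900
```

The setting paths correspond to the k3d setting names listed below.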

Nginx setting k3d default k3d setting proxy_timeout (default for all server stanzas) 600 (s) settings.defaultProxyTimeout worker_connections 1024 settings.workerConnections"},{"location":"design/defaults/#overrides","title":"Overrides","text":""},{"location":"design/defaults/#multiple-server-nodes","title":"Multiple server nodes","text":""},{"location":"design/defaults/#api-ports","title":"API-Ports","text":""},{"location":"design/defaults/#kubeconfig","title":"Kubeconfig","text":""},{"location":"design/defaults/#networking","title":"Networking","text":""},{"location":"design/networking/","title":"Networking","text":""},{"location":"design/networking/#introduction","title":"Introduction","text":"

By default, k3d creates a new (docker) network for every new cluster. Use the --network STRING flag upon creation to connect to an existing network instead. Existing networks won\u2019t be managed by k3d together with the cluster lifecycle.
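For example, attaching a cluster to a pre-created network (network and cluster names are illustrative):

```shell
# Create a docker network up front, then attach the cluster to it
docker network create my-shared-net
k3d cluster create mycluster --network my-shared-net

# Deleting the cluster leaves the pre-existing network untouched,
# so it has to be removed separately
k3d cluster delete mycluster
docker network rm my-shared-net
```

This is useful when the cluster should share a network with other, non-k3d containers.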

"},{"location":"design/networking/#connecting-to-docker-internalpre-defined-networks","title":"Connecting to docker \u201cinternal\u201d/pre-defined networks","text":""},{"location":"design/networking/#host-network","title":"host network","text":"

When using the --network flag to connect to the host network (i.e. k3d cluster create --network host), you won\u2019t be able to create more than one server node. An edge case would be one server node (with agent disabled) and one agent node.

"},{"location":"design/networking/#bridge-network","title":"bridge network","text":"

By default, every network that k3d creates works in bridge mode. But when you try to use --network bridge to connect to docker\u2019s internal bridge network, you may run into issues with grabbing certificates from the API-Server. Single-Node clusters should work though.

"},{"location":"design/networking/#none-network","title":"none \u201cnetwork\u201d","text":"

Well.. this doesn\u2019t really make sense for k3d anyway \u00af\\_(\u30c4)_/\u00af

"},{"location":"design/project/","title":"Project Overview","text":""},{"location":"design/project/#about-this-page","title":"About This Page","text":"

On this page we\u2019ll try to give an overview of all the moving bits and pieces in k3d to ease contributions to the project.

"},{"location":"design/project/#directory-overview","title":"Directory Overview","text":""},{"location":"design/project/#packages-overview","title":"Packages Overview","text":""},{"location":"design/project/#anatomy-of-a-cluster","title":"Anatomy of a Cluster","text":"

By default, every k3d cluster consists of at least 2 containers (nodes):

  1. (optional, but default and strongly recommended) loadbalancer

    • image: ghcr.io/k3d-io/k3d-proxy, built from proxy/
    • purpose: proxy and load balance requests from the outside (i.e. most of the time your local host) to the cluster
      • by default, it e.g. proxies all traffic arriving for the Kubernetes API on port 6443 (the default listening port of K3s) to all the server nodes in the cluster
      • can be used for multiple port-mappings to one or more nodes in your cluster
        • that way, port-mappings can also easily be added/removed after the cluster creation, as we can simply re-create the proxy without affecting cluster state
  2. (required, always present) primary server node

    • image: rancher/k3s, built from github.com/k3s-io/k3s
    • purpose: (initializing) server (formerly: master) node of the cluster
      • runs the K3s executable (which runs containerd, the Kubernetes API Server, etcd/sqlite, etc.): k3s server
      • in a multi-server setup, it initializes the cluster with an embedded etcd database (using the K3s --cluster-init flag)
  3. (optional) secondary server node(s)

    • image: rancher/k3s, built from github.com/k3s-io/k3s
  4. (optional) agent node(s)

    • image: rancher/k3s, built from github.com/k3s-io/k3s
    • purpose: running the K3s agent process (kubelet, etc.): k3s agent
"},{"location":"design/project/#automation-ci","title":"Automation (CI)","text":"

The k3d repository mainly leverages the following two CI systems:

"},{"location":"design/project/#documentation","title":"Documentation","text":"

The website k3d.io containing all the documentation for k3d is built using mkdocs, configured via the mkdocs.yml config file with all the content residing in the docs/ directory (Markdown). Use mkdocs serve in the repository root to build and serve the webpage locally. Some parts of the documentation are auto-generated, e.g. docs/usage/commands/ is generated using Cobra\u2019s command docs generation functionality in docgen/.

"},{"location":"faq/compatibility/","title":"Compatibility","text":"

With each release, we test whether k3d works with specific versions of Docker and K3s, to ensure that at least the most recent versions of Docker and the active K3s releases (i.e. non-EOL release channels, similar to Kubernetes) work properly with it. The tests happen automatically in GitHub Actions. Some versions of Docker and K3s are expected to fail with specific versions of k3d due to e.g. incompatible dependencies or missing features. We test a full cluster lifecycle with different K3s channels, meaning that the following list refers to the current latest version released under the given channel.

"},{"location":"faq/compatibility/#releases","title":"Releases","text":""},{"location":"faq/compatibility/#v540-26032022","title":"v5.4.0 - 26.03.2022","text":"

Test Workflow: https://github.com/k3d-io/k3d/actions/runs/2044325827

"},{"location":"faq/compatibility/#docker","title":"Docker","text":"

Expected to Fail with the following versions:

"},{"location":"faq/compatibility/#k3s","title":"K3s","text":"

Expected to Fail with the following versions:

"},{"location":"faq/compatibility/#v530-03022022","title":"v5.3.0 - 03.02.2022","text":""},{"location":"faq/compatibility/#docker_1","title":"Docker","text":"

Expected to Fail with the following versions:

"},{"location":"faq/compatibility/#k3s_1","title":"K3s","text":"

Expected to Fail with the following versions:

"},{"location":"faq/faq/","title":"FAQ","text":""},{"location":"faq/faq/#issues-with-btrfs","title":"Issues with BTRFS","text":""},{"location":"faq/faq/#issues-with-zfs","title":"Issues with ZFS","text":""},{"location":"faq/faq/#pods-evicted-due-to-lack-of-disk-space","title":"Pods evicted due to lack of disk space","text":""},{"location":"faq/faq/#restarting-a-multi-server-cluster-or-the-initializing-server-node-fails","title":"Restarting a multi-server cluster or the initializing server node fails","text":""},{"location":"faq/faq/#passing-additional-argumentsflags-to-k3s-and-on-to-eg-the-kube-apiserver","title":"Passing additional arguments/flags to k3s (and on to e.g. the kube-apiserver)","text":" "},{"location":"faq/faq/#how-to-access-services-like-a-database-running-on-my-docker-host-machine","title":"How to access services (like a database) running on my Docker Host Machine","text":""},{"location":"faq/faq/#running-behind-a-corporate-proxy","title":"Running behind a corporate proxy","text":"

Running k3d behind a corporate proxy can lead to some issues with k3d that have already been reported in more than one issue. Some can be fixed by passing the HTTP_PROXY environment variables to k3d, some have to be fixed in docker\u2019s daemon.json file and some are as easy as adding a volume mount.

"},{"location":"faq/faq/#pods-fail-to-start-x509-certificate-signed-by-unknown-authority","title":"Pods fail to start: x509: certificate signed by unknown authority","text":" "},{"location":"faq/faq/#spurious-pid-entries-in-proc-after-deleting-k3d-cluster-with-shared-mounts","title":"Spurious PID entries in /proc after deleting k3d cluster with shared mounts","text":""},{"location":"faq/faq/#solved-nodes-fail-to-start-or-get-stuck-in-notready-state-with-log-nf_conntrack_max-permission-denied","title":"[SOLVED] Nodes fail to start or get stuck in NotReady state with log nf_conntrack_max: permission denied","text":""},{"location":"faq/faq/#problem","title":"Problem","text":""},{"location":"faq/faq/#workaround","title":"Workaround","text":""},{"location":"faq/faq/#fix","title":"Fix","text":"

This is going to be fixed \u201cupstream\u201d in k3s itself in rancher/k3s#3337 and backported to k3s versions as low as v1.18.

"},{"location":"faq/faq/#dockerhub-pull-rate-limit","title":"DockerHub Pull Rate Limit","text":""},{"location":"faq/faq/#problem_1","title":"Problem","text":"

You\u2019re deploying something to the cluster using an image from DockerHub and the image fails to be pulled, with a 429 response code and a message saying You have reached your pull rate limit. You may increase the limit by authenticating and upgrading.

"},{"location":"faq/faq/#cause","title":"Cause","text":"

This is caused by DockerHub\u2019s pull rate limit (see https://docs.docker.com/docker-hub/download-rate-limit/), which limits pulls from unauthenticated/anonymous users to 100 pulls per 6 hours and pulls from authenticated users (not paying customers) to 200 pulls per 6 hours (as of the time of writing).

"},{"location":"faq/faq/#solution","title":"Solution","text":"

a) use images from a private registry, e.g. configured as a pull-through cache for DockerHub; b) use a different public registry without such limitations, if the same image is stored there; c) authenticate containerd inside k3s/k3d to use your DockerHub user
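Option (a) can be realized with k3d's own registry management; a sketch (names and ports are illustrative, and k3d prefixes registry names with `k3d-`):

```shell
# Create a k3d-managed registry acting as a pull-through cache for DockerHub
k3d registry create docker-io-mirror \
  --proxy-remote-url https://registry-1.docker.io \
  --port 5005

# Tell the new cluster to use it as a mirror for docker.io
cat > registries.yaml <<EOF
mirrors:
  "docker.io":
    endpoint:
      - http://k3d-docker-io-mirror:5005
EOF

k3d cluster create mycluster \
  --registry-use k3d-docker-io-mirror:5005 \
  --registry-config registries.yaml
```

After the first pull, repeated pulls of the same image are served from the cache instead of counting against the DockerHub limit.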

"},{"location":"faq/faq/#c-authenticate-containerd-against-dockerhub","title":"(c) Authenticate Containerd against DockerHub","text":"
  1. Create a registry configuration file for containerd:

    # saved as e.g. $HOME/registries.yaml\nconfigs:\n  \"docker.io\":\n    auth:\n      username: \"$USERNAME\"\n      password: \"$PASSWORD\"\n
  2. Create a k3d cluster using that config:

    k3d cluster create --registry-config $HOME/registries.yaml\n
  3. Profit. That\u2019s it. In the test for this, we pulled the same image 120 times in a row (confirmed that pull numbers went up) without being rate limited (as a non-paying, normal user).

"},{"location":"faq/faq/#longhorn-in-k3d","title":"Longhorn in k3d","text":""},{"location":"faq/faq/#problem_2","title":"Problem","text":"

Longhorn is not working when deployed in a K3s cluster spawned with k3d.

"},{"location":"faq/faq/#cause_1","title":"Cause","text":"

The container image of K3s is quite limited and doesn\u2019t contain the necessary libraries. Additional volume mounts and more would also be required to get Longhorn up and running properly. In short, Longhorn relies too heavily on the host OS to work properly in the dockerized environment without substantial modifications.

"},{"location":"faq/faq/#solution_1","title":"Solution","text":"

There are a few ways one can build a working image to use with k3d. See https://github.com/k3d-io/k3d/discussions/478 for more info.

"},{"location":"usage/commands/","title":"Command Tree","text":"
k3d\n  --verbose  # GLOBAL: enable verbose (debug) logging (default: false)\n  --trace  # GLOBAL: enable super verbose logging (trace logging) (default: false)\n  --version  # show k3d and k3s version\n  -h, --help  # GLOBAL: show help text\n\n  cluster [CLUSTERNAME]  # default cluster name is 'k3s-default'\n    create\n      -a, --agents  # specify how many agent nodes you want to create (integer, default: 0)\n      --agents-memory # specify memory limit for agent containers/nodes (unit, e.g. 1g)\n      --api-port  # specify the port on which the cluster will be accessible (format '[HOST:]HOSTPORT', default: random)\n      -c, --config  # use a config file (format 'PATH')\n      -e, --env  # add environment variables to the nodes (quoted string, format: 'KEY[=VALUE][@NODEFILTER[;NODEFILTER...]]', use flag multiple times)\n      --gpus  # [from docker CLI] add GPU devices to the node containers (string, e.g. 'all')\n      -i, --image  # specify which k3s image should be used for the nodes (string, default: 'docker.io/rancher/k3s:v1.20.0-k3s2', tag changes per build)\n      --k3s-arg  # add additional arguments to the k3s server/agent (quoted string, use flag multiple times) (see https://rancher.com/docs/k3s/latest/en/installation/install-options/server-config/#k3s-server-cli-help & https://rancher.com/docs/k3s/latest/en/installation/install-options/agent-config/#k3s-agent-cli-help)\n      --kubeconfig-switch-context  # (implies --kubeconfig-update-default) automatically sets the current-context of your default kubeconfig to the new cluster's context (default: true)\n      --kubeconfig-update-default  # enable the automated update of the default kubeconfig with the details of the newly created cluster (also sets '--wait=true') (default: true)\n      -l, --label  # add (docker) labels to the node containers (format: 'KEY[=VALUE][@NODEFILTER[;NODEFILTER...]]', use flag multiple times)\n      --network  # specify an existing (docker) network you want to connect to 
(string)\n      --no-hostip  # disable the automatic injection of the Host IP as 'host.k3d.internal' into the containers and CoreDNS (default: false)\n      --no-image-volume  # disable the creation of a volume for storing images (used for the 'k3d image import' command) (default: false)\n      --no-lb  # disable the creation of a load balancer in front of the server nodes (default: false)\n      --no-rollback  # disable the automatic rollback actions, if anything goes wrong (default: false)\n      -p, --port  # add some more port mappings (format: '[HOST:][HOSTPORT:]CONTAINERPORT[/PROTOCOL][@NODEFILTER]', use flag multiple times)\n      --registry-create  # create a new (docker) registry dedicated for this cluster (default: false)\n      --registry-use  # use an existing local (docker) registry with this cluster (string, use multiple times)\n      -s, --servers  # specify how many server nodes you want to create (integer, default: 1)\n      --servers-memory # specify memory limit for server containers/nodes (unit, e.g. 1g)\n      --token  # specify a cluster token (string, default: auto-generated)\n      --timeout  # specify a timeout, after which the cluster creation will be interrupted and changes rolled back (duration, e.g. '10s')\n      -v, --volume  # specify additional bind-mounts (format: '[SOURCE:]DEST[@NODEFILTER[;NODEFILTER...]]', use flag multiple times)\n      --wait  # enable waiting for all server nodes to be ready before returning (default: true)\n    start CLUSTERNAME  # start a (stopped) cluster\n      -a, --all  # start all clusters (default: false)\n      --wait  # wait for all servers and server-loadbalancer to be up before returning (default: true)\n      --timeout  # maximum waiting time for '--wait' before canceling/returning (duration, e.g. 
'10s')\n    stop CLUSTERNAME  # stop a cluster\n      -a, --all  # stop all clusters (default: false)\n    delete CLUSTERNAME  # delete an existing cluster\n      -a, --all  # delete all existing clusters (default: false)\n    list [CLUSTERNAME [CLUSTERNAME ...]]\n      --no-headers  # do not print headers (default: false)\n      --token  # show column with cluster tokens (default: false)\n      -o, --output  # format the output (format: 'json|yaml')\n  completion [bash | zsh | fish | (psh | powershell)]  # generate completion scripts for common shells\n  config\n    init  # write a default k3d config (as a starting point)\n      -f, --force  # force overwrite target file (default: false)\n      -o, --output  # file to write to (string, default \"k3d-default.yaml\")\n  help [COMMAND]  # show help text for any command\n  image\n    import [IMAGE | ARCHIVE [IMAGE | ARCHIVE ...]]  # Load one or more images from the local runtime environment or tar-archives into k3d clusters\n      -c, --cluster  # clusters to load the image into (string, use flag multiple times, default: k3s-default)\n      -k, --keep-tarball  # do not delete the image tarball from the shared volume after completion (default: false)\n  kubeconfig\n    get (CLUSTERNAME [CLUSTERNAME ...] | --all) # get kubeconfig from cluster(s) and write it to stdout\n      -a, --all  # get kubeconfigs from all clusters (default: false)\n    merge | write (CLUSTERNAME [CLUSTERNAME ...] | --all)  # get kubeconfig from cluster(s) and merge it/them into a (kubeconfig-)file\n      -a, --all  # get kubeconfigs from all clusters (default: false)\n      -s, --kubeconfig-switch-context  # switch current-context in kubeconfig to the new context (default: true)\n      -d, --kubeconfig-merge-default  # update the default kubeconfig (usually $KUBECONFIG or $HOME/.kube/config)\n      -o, --output  # specify the output file where the kubeconfig should be written to (string)\n      --overwrite  # [Careful!] 
forcefully overwrite the output file, ignoring existing contents (default: false)\n      -u, --update  # update conflicting fields in existing kubeconfig (default: true)\n  node\n    create NODENAME  # Create new nodes (and add them to existing clusters)\n      -c, --cluster  # specify the cluster that the node shall connect to (string, default: k3s-default)\n      -i, --image  # specify which k3s image should be used for the node(s) (string, default: 'docker.io/rancher/k3s:v1.20.0-k3s2', tag changes per build)\n      --replicas  # specify how many replicas you want to create with this spec (integer, default: 1)\n      --role  # specify the node role (string, format: 'agent|server', default: agent)\n      --timeout # specify a timeout duration, after which the node creation will be interrupted, if not done yet (duration, e.g. '10s')\n      --wait  # wait for the node to be up and running before returning (default: true)\n    start NODENAME  # start a (stopped) node\n    stop NODENAME # stop a node\n    delete NODENAME  # delete an existing node\n      -a, --all  # delete all existing nodes (default: false)\n      -r, --registries  # also delete registries, as a special type of node (default: false)\n    list NODENAME\n      --no-headers  # do not print headers (default: false)\n  registry\n    create REGISTRYNAME\n      -i, --image  # specify image used for the registry (string, default: \"docker.io/library/registry:2\")\n      -p, --port  # select host port to map to (format: '[HOST:]HOSTPORT', default: 'random')\n    delete REGISTRYNAME\n      -a, --all  # delete all existing registries (default: false)\n    list [NAME [NAME...]]\n      --no-headers  # disable table headers (default: false)\n  version  # show k3d and k3s version\n
"},{"location":"usage/configfile/","title":"Using Config Files","text":"

The config file feature is available as of k3d v4.0.0

"},{"location":"usage/configfile/#introduction","title":"Introduction","text":"

Syntax & Semantics

The options defined in the config file are not a 1:1 match with the CLI flags; they differ in naming as well as in style, usage and structure.

"},{"location":"usage/configfile/#usage","title":"Usage","text":"

Using a config file is as easy as putting it in a well-known place in your file system and then referencing it via flag:

"},{"location":"usage/configfile/#required-fields","title":"Required Fields","text":"

As of the time of writing, the config file only requires you to define two fields: apiVersion and kind.

So this would be the minimal config file, which configures absolutely nothing:

apiVersion: k3d.io/v1alpha5\nkind: Simple\n
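As a sketch, you can generate this minimal file from the shell and verify that both required fields are present:

```shell
# Write the minimal config file shown above.
cat > /tmp/k3d-minimal.yaml <<'EOF'
apiVersion: k3d.io/v1alpha5
kind: Simple
EOF

# Count the two required top-level fields.
grep -E -c '^(apiVersion|kind):' /tmp/k3d-minimal.yaml   # prints 2
```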
"},{"location":"usage/configfile/#config-options","title":"Config Options","text":"

The configuration options for k3d are continuously evolving, and so is the config file syntax itself. Currently, the config file is still in an alpha state, meaning that it is subject to change at any time (though we try to keep breaking changes to a minimum).

Validation via JSON-Schema

k3d uses a JSON-Schema to describe the expected format and fields of the configuration file. This schema is also used to validate a user-given config file. It can be found in the respective config version sub-directory in the repository (e.g. here for v1alpha5) and can be used to look up supported fields, or by linters (e.g. in your code editor) to validate the config file.

"},{"location":"usage/configfile/#all-options-example","title":"All Options: Example","text":"

Since the config options and the config file are changing quite a bit, it\u2019s hard to keep track of all the supported config file settings, so here\u2019s an example showing all of them as of the time of writing:

# k3d configuration file, saved as e.g. /home/me/myk3dcluster.yaml\napiVersion: k3d.io/v1alpha5 # this will change in the future as we make everything more stable\nkind: Simple # internally, we also have a Cluster config, which is not yet available externally\nmetadata:\n  name: mycluster # name that you want to give to your cluster (will still be prefixed with `k3d-`)\nservers: 1 # same as `--servers 1`\nagents: 2 # same as `--agents 2`\nkubeAPI: # same as `--api-port myhost.my.domain:6445` (where the name would resolve to 127.0.0.1)\n  host: \"myhost.my.domain\" # important for the `server` setting in the kubeconfig\n  hostIP: \"127.0.0.1\" # where the Kubernetes API will be listening on\n  hostPort: \"6445\" # where the Kubernetes API listening port will be mapped to on your host system\nimage: rancher/k3s:v1.20.4-k3s1 # same as `--image rancher/k3s:v1.20.4-k3s1`\nnetwork: my-custom-net # same as `--network my-custom-net`\nsubnet: \"172.28.0.0/16\" # same as `--subnet 172.28.0.0/16`\ntoken: superSecretToken # same as `--token superSecretToken`\nvolumes: # repeatable flags are represented as YAML lists\n  - volume: /my/host/path:/path/in/node # same as `--volume '/my/host/path:/path/in/node@server:0;agent:*'`\n    nodeFilters:\n      - server:0\n      - agent:*\nports:\n  - port: 8080:80 # same as `--port '8080:80@loadbalancer'`\n    nodeFilters:\n      - loadbalancer\nenv:\n  - envVar: bar=baz # same as `--env 'bar=baz@server:0'`\n    nodeFilters:\n      - server:0\nregistries: # define how registries should be created or used\n  create: # creates a default registry to be used with the cluster; same as `--registry-create registry.localhost`\n    name: registry.localhost\n    host: \"0.0.0.0\"\n    hostPort: \"5000\"\n    proxy: # omit this to have a \"normal\" registry, set this to create a registry proxy (pull-through cache)\n      remoteURL: https://registry-1.docker.io # mirror the DockerHub registry\n      username: \"\" # unauthenticated\n      password: 
\"\" # unauthenticated\n    volumes:\n      - /some/path:/var/lib/registry # persist registry data locally\n  use:\n    - k3d-myotherregistry:5000 # some other k3d-managed registry; same as `--registry-use 'k3d-myotherregistry:5000'`\n  config: | # define contents of the `registries.yaml` file (or reference a file); same as `--registry-config /path/to/config.yaml`\n    mirrors:\n      \"my.company.registry\":\n        endpoint:\n          - http://my.company.registry:5000\nhostAliases: # /etc/hosts style entries to be injected into /etc/hosts in the node containers and in the NodeHosts section in CoreDNS\n  - ip: 1.2.3.4\n    hostnames: \n      - my.host.local\n      - that.other.local\n  - ip: 1.1.1.1\n    hostnames:\n      - cloud.flare.dns\noptions:\n  k3d: # k3d runtime settings\n    wait: true # wait for cluster to be usable before returning; same as `--wait` (default: true)\n    timeout: \"60s\" # wait timeout before aborting; same as `--timeout 60s`\n    disableLoadbalancer: false # same as `--no-lb`\n    disableImageVolume: false # same as `--no-image-volume`\n    disableRollback: false # same as `--no-rollback`\n    loadbalancer:\n      configOverrides:\n        - settings.workerConnections=2048\n  k3s: # options passed on to K3s itself\n    extraArgs: # additional arguments passed to the `k3s server|agent` command; same as `--k3s-arg`\n      - arg: \"--tls-san=my.host.domain\"\n        nodeFilters:\n          - server:*\n    nodeLabels:\n      - label: foo=bar # same as `--k3s-node-label 'foo=bar@agent:1'` -> this results in a Kubernetes node label\n        nodeFilters:\n          - agent:1\n  kubeconfig:\n    updateDefaultKubeconfig: true # add new cluster to your default Kubeconfig; same as `--kubeconfig-update-default` (default: true)\n    switchCurrentContext: true # also set current-context to the new cluster's context; same as `--kubeconfig-switch-context` (default: true)\n  runtime: # runtime (docker) specific options\n    gpuRequest: all # same as 
`--gpus all`\n    labels:\n      - label: bar=baz # same as `--runtime-label 'bar=baz@agent:1'` -> this results in a runtime (docker) container label\n        nodeFilters:\n          - agent:1\n    ulimits:\n      - name: nofile\n        soft: 26677\n        hard: 26677\n
"},{"location":"usage/configfile/#tips","title":"Tips","text":""},{"location":"usage/configfile/#config-file-vs-cli-flags","title":"Config File vs. CLI Flags","text":"

k3d uses Cobra and Viper for CLI and general config handling respectively. This automatically introduces a \u201cconfig option order of priority\u201d (precedence order):

Config Precedence Order

Source: spf13/viper#why-viper

Internal Setting > CLI Flag > Environment Variable > Config File > (k/v store >) Defaults

This means that you can define e.g. a \u201cbase configuration file\u201d with settings shared across different clusters and override only the fields that differ via CLI flags/arguments. For example, you could use the same config file to create three clusters that only differ in their names and kubeAPI (--api-port) settings.
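The precedence order can be illustrated with a tiny shell sketch. This is not k3d's code; it only models a simplified subset of the chain (CLI flag > environment variable > config file > default):

```shell
# Return the first non-empty value, mirroring the precedence chain:
# CLI flag > environment variable > config file > default.
resolve() {
  for v in "$1" "$2" "$3" "$4"; do
    if [ -n "$v" ]; then echo "$v"; return; fi
  done
}

resolve ""  ""  "3" "1" > /tmp/prec1.txt   # only the config file sets a value
resolve "5" "2" "3" "1" > /tmp/prec2.txt   # the CLI flag beats everything else
cat /tmp/prec1.txt /tmp/prec2.txt          # prints 3, then 5
```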

"},{"location":"usage/configfile/#references","title":"References","text":""},{"location":"usage/exposing_services/","title":"Exposing Services","text":""},{"location":"usage/exposing_services/#1-via-ingress-recommended","title":"1. via Ingress (recommended)","text":"

In this example, we will deploy a simple nginx webserver deployment and make it accessible via ingress. For that to work, we have to create the cluster in a way that the internal port 80 (on which the traefik ingress controller is listening) is exposed on the host system.

  1. Create a cluster, mapping the ingress port 80 to localhost:8081

    k3d cluster create --api-port 6550 -p \"8081:80@loadbalancer\" --agents 2

    Good to know

    • --api-port 6550 is not required for the example to work. It\u2019s used to have k3s\u2019s API-Server listening on port 6550 with that port mapped to the host system.
    • the port-mapping construct 8081:80@loadbalancer means: \u201cmap port 8081 from the host to port 80 on the container which matches the nodefilter loadbalancer\u201d
      • the loadbalancer nodefilter matches only the serverlb that\u2019s deployed in front of a cluster\u2019s server nodes
        • all ports exposed on the serverlb will be proxied to the same ports on all server nodes in the cluster
  2. Get the kubeconfig file (redundant, as k3d cluster create already merges it into your default kubeconfig file)

    export KUBECONFIG=\"$(k3d kubeconfig write k3s-default)\"

  3. Create an nginx deployment

    kubectl create deployment nginx --image=nginx

  4. Create a ClusterIP service for it

    kubectl create service clusterip nginx --tcp=80:80

  5. Create an ingress object for it by copying the following manifest to a file and applying with kubectl apply -f thatfile.yaml

    Note: k3s deploys traefik as the default ingress controller

    # apiVersion: networking.k8s.io/v1beta1 # for k3s < v1.19\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: nginx\n  annotations:\n    ingress.kubernetes.io/ssl-redirect: \"false\"\nspec:\n  rules:\n  - http:\n      paths:\n      - path: /\n        pathType: Prefix\n        backend:\n          service:\n            name: nginx\n            port:\n              number: 80\n
  6. Curl it via localhost

    curl localhost:8081/
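For reference, the whole walkthrough can be collected into a single script. This is only a sketch: actually running it requires k3d, kubectl and docker, so it is merely syntax-checked here, and ingress.yaml is assumed to contain the manifest from step 5:

```shell
cat > /tmp/ingress-demo.sh <<'EOF'
#!/bin/sh
set -e
k3d cluster create --api-port 6550 -p "8081:80@loadbalancer" --agents 2
kubectl create deployment nginx --image=nginx
kubectl create service clusterip nginx --tcp=80:80
kubectl apply -f ingress.yaml   # the manifest from step 5, saved as ingress.yaml
curl localhost:8081/
EOF

sh -n /tmp/ingress-demo.sh && echo "syntax OK"
```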

"},{"location":"usage/exposing_services/#2-via-nodeport","title":"2. via NodePort","text":"
  1. Create a cluster, mapping the port 30080 from agent-0 to localhost:8082

    k3d cluster create mycluster -p \"8082:30080@agent:0\" --agents 2

    • Note 1: Kubernetes\u2019 default NodePort range is 30000-32767
    • Note 2: You may as well expose the whole NodePort range from the very beginning, e.g. via k3d cluster create mycluster --agents 3 -p \"30000-32767:30000-32767@server:0\" (See this video from @portainer)

      • Warning: Docker creates iptable entries and a new proxy process per port-mapping, so this may take a very long time or even freeze your system!

      \u2026 (Steps 2 and 3 like above) \u2026

  2. Create a NodePort service for it by copying the following manifest to a file and applying it with kubectl apply -f

    apiVersion: v1\nkind: Service\nmetadata:\n  labels:\n    app: nginx\n  name: nginx\nspec:\n  ports:\n  - name: 80-80\n    nodePort: 30080\n    port: 80\n    protocol: TCP\n    targetPort: 80\n  selector:\n    app: nginx\n  type: NodePort\n
  3. Curl it via localhost

    curl localhost:8082/
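Since a mismatch between the mapped container port and the service's nodePort is easy to miss, a small helper (sketch only, not part of k3d) can check that a port falls within Kubernetes' default NodePort range before you wire up the mapping:

```shell
# Check that a port is inside Kubernetes' default NodePort range (30000-32767).
check_nodeport() {
  if [ "$1" -ge 30000 ] && [ "$1" -le 32767 ]; then
    echo "$1: ok"
  else
    echo "$1: outside default NodePort range"
  fi
}

check_nodeport 30080 > /tmp/np-ok.txt    # the port used in the example above
check_nodeport 8080  > /tmp/np-bad.txt   # a port that would NOT work as nodePort
cat /tmp/np-ok.txt /tmp/np-bad.txt
```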

"},{"location":"usage/importing_images/","title":"Importing modes","text":""},{"location":"usage/importing_images/#auto","title":"Auto","text":"

Auto-determine whether to use direct or tools-node.

For remote container runtimes, tools-node is faster due to less network overhead, so it is selected automatically in that case.

Otherwise direct is used.
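The selection described above can be sketched like this. It is not k3d's actual implementation; using the runtime endpoint's URL scheme as the remote/local signal is an assumption for illustration:

```shell
# Pick an import mode from a container runtime endpoint (illustrative heuristic).
pick_import_mode() {
  case "$1" in
    tcp://*|ssh://*)  echo "tools-node" ;;  # remote runtime: less network overhead
    *)                echo "direct" ;;      # local runtime: no extra container
  esac
}

pick_import_mode "unix:///var/run/docker.sock" > /tmp/mode-local.txt
pick_import_mode "tcp://10.0.0.5:2375"         > /tmp/mode-remote.txt
```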

"},{"location":"usage/importing_images/#direct","title":"Direct","text":"

Directly load the given images to the k3s nodes. No separate container is spawned, no intermediate files are written.

"},{"location":"usage/importing_images/#tools-node","title":"Tools Node","text":"

Start a k3d-tools container in the container runtime, copy images to that runtime, then load the images to k3s nodes from there.

"},{"location":"usage/k3s/","title":"K3s Features in k3d","text":"

K3s ships with many built-in features and services, some of which can only be used in \u201cnon-standard\u201d ways in k3d because K3s runs in containers.

"},{"location":"usage/k3s/#general-k3s-documentation","title":"General: K3s documentation","text":""},{"location":"usage/k3s/#coredns","title":"CoreDNS","text":"

Cluster DNS service

"},{"location":"usage/k3s/#resources","title":"Resources","text":""},{"location":"usage/k3s/#coredns-in-k3d","title":"CoreDNS in k3d","text":"

Basically, CoreDNS works the same in k3d as it does in other clusters. One thing to note, though, is that the default forward . /etc/resolv.conf configured in the Corefile doesn\u2019t work the same way, since the /etc/resolv.conf file inside the K3s node containers is not the same as the one on your local machine.

"},{"location":"usage/k3s/#modifications","title":"Modifications","text":"

As of k3d v5.x, k3d injects entries into the NodeHosts (basically a hosts file similar to /etc/hosts on Linux, managed by K3s) to enable Pods in the cluster to resolve the names of other containers in the same docker network (cluster network), plus a special entry host.k3d.internal, which resolves to the IP of the network gateway (and can be used e.g. to resolve DNS queries via your local resolver). There\u2019s a PR in progress to make customizations easier (for k3d and for users): https://github.com/k3s-io/k3s/pull/4397
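NodeHosts uses plain /etc/hosts syntax. Here is a sketch of what the injected entries could look like (the IPs and cluster name are illustrative) and how such a file resolves a name:

```shell
# Illustrative NodeHosts content, in /etc/hosts syntax.
cat > /tmp/nodehosts <<'EOF'
172.18.0.1 host.k3d.internal
172.18.0.3 k3d-mycluster-server-0
172.18.0.4 k3d-mycluster-agent-0
EOF

# Resolve host.k3d.internal the way a hosts-file lookup would.
awk '$2 == "host.k3d.internal" { print $1 }' /tmp/nodehosts   # prints 172.18.0.1
```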

"},{"location":"usage/k3s/#local-path-provisioner","title":"local-path-provisioner","text":"

Dynamically provisioning persistent local storage with Kubernetes

"},{"location":"usage/k3s/#resources_1","title":"Resources","text":""},{"location":"usage/k3s/#local-path-provisioner-in-k3d","title":"local-path-provisioner in k3d","text":"

In k3d, the local path that the local-path-provisioner uses (default: /var/lib/rancher/k3s/storage) lies inside the container\u2019s filesystem, meaning that by default it\u2019s not mapped anywhere (e.g. into your user home directory) for you to use. To easily access the files inside this path, map some local directory to it by adding --volume $HOME/some/directory:/var/lib/rancher/k3s/storage@all to your k3d cluster create command.
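If you prefer the config file, the same mapping can be expressed via the volumes list (the host path here is illustrative):

```yaml
volumes:
  - volume: /home/me/k3d-storage:/var/lib/rancher/k3s/storage
    nodeFilters:
      - all
```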

"},{"location":"usage/k3s/#traefik","title":"Traefik","text":"

Kubernetes Ingress Controller

"},{"location":"usage/k3s/#resources_2","title":"Resources","text":""},{"location":"usage/k3s/#traefik-in-k3d","title":"Traefik in k3d","text":"

k3d runs K3s in containers, so you\u2019ll need to expose the http/https ports on your host to easily access Ingress resources in your cluster. See the Exposing Services guide for instructions on how to do this.

"},{"location":"usage/k3s/#servicelb-klipper-lb","title":"servicelb (klipper-lb)","text":"

Embedded service load balancer in Klipper. It allows you to use services with type: LoadBalancer in K3s by creating tiny proxies that use hostPorts.

"},{"location":"usage/k3s/#resources_3","title":"Resources","text":""},{"location":"usage/k3s/#servicelb-in-k3d","title":"servicelb in k3d","text":"

klipper-lb creates new pods that proxy traffic from hostPorts to the service ports of type: LoadBalancer. The hostPort in this case is a port inside a K3s container, not on your local host, so you\u2019d need to add the port-mapping via the --port flag when creating the cluster.

"},{"location":"usage/kubeconfig/","title":"Handling Kubeconfigs","text":"

By default, k3d will update your default kubeconfig with your new cluster\u2019s details and set the current-context to it (this can be disabled). To get a kubeconfig set up for connecting to a k3d cluster without this automatism, you have several options.

What is the default kubeconfig?

We determine the path of the used or default kubeconfig in two ways:

  1. Using the KUBECONFIG environment variable, if it specifies exactly one file
  2. Using the default path (e.g. on Linux it\u2019s $HOME/.kube/config)
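A sketch of those two rules as a shell function. This is simplified, not k3d's code; in particular, treating a KUBECONFIG that lists more than one file as "fall back to the default path" is an assumption about that edge case:

```shell
# Resolve the kubeconfig path following the two rules above.
default_kubeconfig() {
  case "$KUBECONFIG" in
    "")  echo "$HOME/.kube/config" ;;   # rule 2: unset -> default path
    *:*) echo "$HOME/.kube/config" ;;   # more than one file -> not "exactly one"
    *)   echo "$KUBECONFIG" ;;          # rule 1: exactly one file
  esac
}

KUBECONFIG=""              default_kubeconfig > /tmp/kc-default.txt
KUBECONFIG="/tmp/one.yaml" default_kubeconfig > /tmp/kc-single.txt
```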
"},{"location":"usage/kubeconfig/#getting-the-kubeconfig-for-a-newly-created-cluster","title":"Getting the kubeconfig for a newly created cluster","text":"
  1. Create a new kubeconfig file after cluster creation

    • k3d kubeconfig write mycluster
      • Note: this will create (or update) the file $HOME/.k3d/kubeconfig-mycluster.yaml
      • Tip: Use it: export KUBECONFIG=$(k3d kubeconfig write mycluster)
      • Note 2: alternatively you can use k3d kubeconfig get mycluster > some-file.yaml
  2. Update your default kubeconfig upon cluster creation (DEFAULT)

    • k3d cluster create mycluster --kubeconfig-update-default
      • Note: this won\u2019t switch the current-context (append --kubeconfig-switch-context to do so)
  3. Update your default kubeconfig after cluster creation

    • k3d kubeconfig merge mycluster --kubeconfig-merge-default
      • Note: this won\u2019t switch the current-context (append --kubeconfig-switch-context to do so)
  4. Update a different kubeconfig after cluster creation

    • k3d kubeconfig merge mycluster --output some/other/file.yaml
      • Note: this won\u2019t switch the current-context
    • The file will be created if it doesn\u2019t exist

Switching the current context

None of the above options switch the current-context by default. This is intended to be least intrusive, since the current-context has a global effect. You can switch the current-context directly with the kubeconfig merge command by adding the --kubeconfig-switch-context flag.

"},{"location":"usage/kubeconfig/#removing-cluster-details-from-the-kubeconfig","title":"Removing cluster details from the kubeconfig","text":"

k3d cluster delete mycluster will always remove the details for mycluster from the default kubeconfig. It will also delete the respective kubeconfig file in $HOME/.k3d/ if it exists.

"},{"location":"usage/kubeconfig/#handling-multiple-clusters","title":"Handling multiple clusters","text":"

k3d kubeconfig merge lets you specify one or more clusters via arguments, or all of them via --all. All kubeconfigs will then be merged into a single file if --kubeconfig-merge-default or --output is specified. If neither of those two flags is specified, a new file is created per cluster and the merged path (e.g. $HOME/.k3d/kubeconfig-cluster1.yaml:$HOME/.k3d/kubeconfig-cluster2.yaml) is returned. Note that with multiple clusters specified, the --kubeconfig-switch-context flag will change the current-context to that of the cluster which was last in the list.

"},{"location":"usage/multiserver/","title":"Creating multi-server clusters","text":"

Important note

For the best results (and fewer unexpected issues), choose an odd number of server nodes: 1, 3, 5, \u2026 (read more on etcd quorum on etcd.io). At least 2 cores and 4GiB of RAM are recommended.

"},{"location":"usage/multiserver/#embedded-etcd","title":"Embedded etcd","text":"

Create a cluster with 3 server nodes using k3s\u2019 embedded etcd database. The first server to be created will use the --cluster-init flag and k3d will wait for it to be up and running before creating (and connecting) the other server nodes.

k3d cluster create multiserver --servers 3\n
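The same cluster can be declared via a config file (see Using Config Files); a minimal equivalent might look like this:

```yaml
apiVersion: k3d.io/v1alpha5
kind: Simple
metadata:
  name: multiserver
servers: 3
```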
"},{"location":"usage/multiserver/#adding-server-nodes-to-a-running-cluster","title":"Adding server nodes to a running cluster","text":"

In theory (and also in practice in most cases), this is as easy as executing the following command:

k3d node create newserver --cluster multiserver --role server\n

There\u2019s a trap!

If your cluster was initially created with only a single server node, then this will fail. That\u2019s because the initial server node was not started with the --cluster-init flag and thus is not using the etcd backend.

"},{"location":"usage/registries/","title":"Using Image Registries","text":""},{"location":"usage/registries/#registries-configuration-file","title":"Registries configuration file","text":"

You can add registries by specifying them in a registries.yaml and referencing it at creation time: k3d cluster create mycluster --registry-config \"/home/YOU/my-registries.yaml\".

This file is a regular k3s registries configuration file, and looks like this:

mirrors:\n  \"my.company.registry:5000\":\n    endpoint:\n      - http://my.company.registry:5000\n

In this example, an image with a name like my.company.registry:5000/nginx:latest would be pulled from the registry running at http://my.company.registry:5000.

This file can also be used for providing additional information necessary for accessing some registries, like authentication and certificates.

"},{"location":"usage/registries/#registries-configuration-file-embedded-in-k3ds-simpleconfig","title":"Registries Configuration File embedded in k3d\u2019s SimpleConfig","text":"

If you\u2019re using a SimpleConfig file to configure your k3d cluster, you may as well embed the registries.yaml in there directly:

apiVersion: k3d.io/v1alpha5\nkind: Simple\nmetadata:\n  name: test\nservers: 1\nagents: 2\nregistries:\n  create: \n    name: myregistry\n  config: |\n    mirrors:\n      \"my.company.registry\":\n        endpoint:\n          - http://my.company.registry:5000\n

Here, the config for the k3d-managed registry, created by the create: {...} option, will be merged with the config specified under config: |.

"},{"location":"usage/registries/#authenticated-registries","title":"Authenticated registries","text":"

When using authenticated registries, we can add the username and password in a configs section in the registries.yaml, like this:

mirrors:\n  my.company.registry:\n    endpoint:\n      - http://my.company.registry\n\nconfigs:\n  my.company.registry:\n    auth:\n      username: aladin\n      password: abracadabra\n
"},{"location":"usage/registries/#secure-registries","title":"Secure registries","text":"

When using secure registries, the registries.yaml file must include information about the certificates. For example, if you want to use images from the secure registry running at https://my.company.registry, you must first download a CA file valid for that server and store it in some well-known directory like ${HOME}/.k3d/my-company-root.pem.

Then you have to mount the CA file in some directory in the nodes in the cluster and include that mounted file in a configs section in the registries.yaml file. For example, if we mount the CA file in /etc/ssl/certs/my-company-root.pem, the registries.yaml will look like:

mirrors:\n  my.company.registry:\n    endpoint:\n      - https://my.company.registry\n\nconfigs:\n  my.company.registry:\n    tls:\n      # we will mount \"my-company-root.pem\" in the /etc/ssl/certs/ directory.\n      ca_file: \"/etc/ssl/certs/my-company-root.pem\"\n

Finally, we can create the cluster, mounting the CA file in the path we specified in ca_file:

k3d cluster create \\\n  --volume \"${HOME}/.k3d/my-registries.yaml:/etc/rancher/k3s/registries.yaml\" \\\n  --volume \"${HOME}/.k3d/my-company-root.pem:/etc/ssl/certs/my-company-root.pem\"\n
"},{"location":"usage/registries/#using-a-local-registry","title":"Using a local registry","text":""},{"location":"usage/registries/#preface-referencing-local-registries","title":"Preface: Referencing local registries","text":"

In the next sections, you\u2019re going to create a local registry (i.e. a container image registry running in a container in your docker host). That container will have a name, e.g. mycluster-registry. If you follow the guide closely (or definitely if you use the k3d-managed option), this name will be known to all the hosts (K3s containers) and workloads in your k3d cluster. However, you usually want to push images into that registry from your local machine, which does not know that name by default. Now you have a few options, including the following three:

  1. Use localhost: Since the container will have a port mapped to your local host, you can just directly reference it via e.g. localhost:12345, where 12345 is the mapped port
    • If you later pull the image from the registry, only the repository path (e.g. myrepo/myimage:mytag in mycluster-registry:5000/myrepo/myimage:mytag) matters to find your image in the targeted registry.
  2. Get your machine to know the container name: For this you can use the plain old hosts file (/etc/hosts on Unix systems and C:\\windows\\system32\\drivers\\etc\\hosts on Windows) by adding an entry like the following to the end of the file:

    127.0.0.1 mycluster-registry\n
  3. Use some special resolving magic: Tools like dnsmasq or nss-myhostname (see info box below) and others can setup your local resolver to directly resolve the registry name to 127.0.0.1.

nss-myhostname to resolve *.localhost

Luckily (for Linux users), NSS-myhostname ships with many Linux distributions and should resolve *.localhost automatically to 127.0.0.1. Otherwise, it\u2019s installable using sudo apt install libnss-myhostname.

"},{"location":"usage/registries/#using-k3d-managed-registries","title":"Using k3d-managed registries","text":""},{"location":"usage/registries/#create-a-dedicated-registry-together-with-your-cluster","title":"Create a dedicated registry together with your cluster","text":"
  1. k3d cluster create mycluster --registry-create mycluster-registry: This creates your cluster mycluster together with a registry container called mycluster-registry

    • k3d sets everything up in the cluster for containerd to be able to pull images from that registry (using the registries.yaml file)
    • the port, which the registry is listening on will be mapped to a random port on your host system
  2. Check the k3d command output or docker ps -f name=mycluster-registry to find the exposed port

  3. Test your registry
"},{"location":"usage/registries/#create-a-customized-k3d-managed-registry","title":"Create a customized k3d-managed registry","text":"
  1. k3d registry create myregistry.localhost --port 12345 creates a new registry called k3d-myregistry.localhost (could be used with automatic resolution of *.localhost, see next section - also, note the k3d- prefix that k3d adds to all resources it creates)
  2. k3d cluster create newcluster --registry-use k3d-myregistry.localhost:12345 (make sure you use the k3d- prefix here) creates a new cluster set up to use that registry
  3. Test your registry
"},{"location":"usage/registries/#using-your-own-not-k3d-managed-local-registry","title":"Using your own (not k3d-managed) local registry","text":"

We recommend using a k3d-managed registry, as it plays nicely with k3d clusters, but here\u2019s also a guide to create your own (not k3d-managed) registry, in case you need features or customizations that k3d does not provide:

Using your own (not k3d-managed) local registry

You can start your own local registry with a few docker commands, like:

docker volume create local_registry\ndocker container run -d --name registry.localhost -v local_registry:/var/lib/registry --restart always -p 12345:5000 registry:2\n

These commands will start a registry container named registry.localhost, reachable on port 12345 on your host. In order to push to this registry, you will need to make it accessible as described in the next section. Once your registry is up and running, you will need to add it to your registries.yaml configuration file. Finally, you have to connect the registry\u2019s network to the k3d cluster network: docker network connect k3d-k3s-default registry.localhost. Then you can test your local registry.
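Collected into one script for reference. This is a sketch: actually running it requires docker and an existing k3d-k3s-default cluster network, so it is only syntax-checked here:

```shell
cat > /tmp/own-registry.sh <<'EOF'
#!/bin/sh
set -e
docker volume create local_registry
docker container run -d --name registry.localhost \
  -v local_registry:/var/lib/registry --restart always \
  -p 12345:5000 registry:2
# After the cluster exists, join the registry to its network:
docker network connect k3d-k3s-default registry.localhost
EOF

sh -n /tmp/own-registry.sh && echo "syntax OK"
```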

"},{"location":"usage/registries/#pushing-to-your-local-registry-address","title":"Pushing to your local registry address","text":"

See Preface

The information below has been addressed in the preface for this section.

"},{"location":"usage/registries/#testing-your-registry","title":"Testing your registry","text":"

You should test that you can push images to your registry and that the cluster can pull and run those images.

We will verify these two things for a local registry (located at k3d-registry.localhost:12345) running on your development machine. The procedure is basically the same for checking an external registry, but some additional configuration may be necessary on your local machine when using an authenticated or secure registry (please refer to Docker\u2019s documentation for this).

Assumptions: In the following test cases, we assume that the registry name k3d-registry.localhost resolves to 127.0.0.1 in your local machine (see section preface for more details) and to the registry container IP for the k3d cluster nodes (K3s containers).
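If k3d-registry.localhost does not resolve on your machine (not every OS resolves *.localhost names automatically), one way to satisfy this assumption is a hosts-file entry like the following (a sketch, using the registry name from this guide):

```
127.0.0.1 k3d-registry.localhost
```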

Note: as explained in the preface, you could replace k3d-registry.localhost:12345 with localhost:12345 in the docker tag and docker push commands below (but not in the kubectl part!)

"},{"location":"usage/registries/#nginx-deployment","title":"Nginx Deployment","text":"

First, we can pull an image (like nginx) and push it to our local registry with:

docker pull nginx:latest\ndocker tag nginx:latest k3d-registry.localhost:12345/nginx:latest\ndocker push k3d-registry.localhost:12345/nginx:latest\n

Then we can deploy a pod referencing this image to your cluster:

cat <<EOF | kubectl apply -f -\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: nginx-test-registry\n  labels:\n    app: nginx-test-registry\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: nginx-test-registry\n  template:\n    metadata:\n      labels:\n        app: nginx-test-registry\n    spec:\n      containers:\n      - name: nginx-test-registry\n        image: k3d-registry.localhost:12345/nginx:latest\n        ports:\n        - containerPort: 80\nEOF\n

Then you should check that the pod is running with kubectl get pods -l \"app=nginx-test-registry\".

"},{"location":"usage/registries/#alpine-pod","title":"Alpine Pod","text":"
  1. Pull the alpine image: docker pull alpine:latest
  2. re-tag it to reference your newly created registry: docker tag alpine:latest k3d-registry.localhost:12345/testimage:local
  3. push it: docker push k3d-registry.localhost:12345/testimage:local
  4. Use kubectl to create a new pod in your cluster using that image to see if the cluster can pull from the new registry: kubectl run --image k3d-registry.localhost:12345/testimage:local testimage --command -- tail -f /dev/null
    • (creates a container that will not do anything but keep on running)
"},{"location":"usage/registries/#creating-a-registry-proxy-pull-through-registry","title":"Creating a registry proxy / pull-through registry","text":"
  1. Create a pull-through registry

    k3d registry create docker-io `# Create a registry named k3d-docker-io` \\\n  -p 5000 `# listening on local host port 5000` \\\n  --proxy-remote-url https://registry-1.docker.io `# let it mirror the Docker Hub registry` \\\n  -v ~/.local/share/docker-io-registry:/var/lib/registry `# also persist the downloaded images on the device outside the container`\n
  2. Create registry.yml

    mirrors:\n  \"docker.io\":\n    endpoint:\n      - http://k3d-docker-io:5000\n
  3. Create a cluster using the pull-through cache

    k3d cluster create cluster01 --registry-use k3d-docker-io:5000 --registry-config registry.yml\n
  4. Once cluster01 is ready, create another cluster with the same registry (or rebuild cluster01); it will use the already locally cached images.

    k3d cluster create cluster02 --registry-use k3d-docker-io:5000 --registry-config registry.yml\n
"},{"location":"usage/registries/#creating-a-registry-proxy-pull-through-registry-via-configfile","title":"Creating a registry proxy / pull-through registry via configfile","text":"
  1. Create a config file, e.g. /home/me/test-regcache.yaml

    apiVersion: k3d.io/v1alpha5\nkind: Simple\nmetadata:\n  name: test-regcache\nregistries:\n  create:\n    name: docker-io # name of the registry container\n    proxy:\n      remoteURL: https://registry-1.docker.io # proxy DockerHub\n    volumes:\n      - /tmp/reg:/var/lib/registry # persist data locally in /tmp/reg\n  config: | # tell K3s to use this registry when pulling from DockerHub\n    mirrors:\n      \"docker.io\":\n        endpoint:\n          - http://docker-io:5000\n
  2. Create cluster from config:

    k3d cluster create -c /home/me/test-regcache.yaml\n
"},{"location":"usage/advanced/calico/","title":"Use Calico instead of Flannel","text":"

Network Policies

k3s comes with a controller that enforces network policies by default. You do not need to switch to Calico for network policies to be enforced. See https://github.com/k3s-io/k3s/issues/1308 for more information. The docs below assume you want to switch to Calico\u2019s policy engine, thus setting --disable-network-policy.

"},{"location":"usage/advanced/calico/#1-download-and-modify-the-calico-descriptor","title":"1. Download and modify the Calico descriptor","text":"

You can follow the documentation

Then you have to change the ConfigMap calico-config: in cni_network_config, add the entry allowing IP forwarding

\"container_settings\": {\n    \"allow_ip_forwarding\": true\n}\n

Or you can directly use this calico.yaml manifest

"},{"location":"usage/advanced/calico/#2-create-the-cluster-without-flannel-and-with-calico","title":"2. Create the cluster without flannel and with calico","text":"

On k3s cluster creation, flannel and the default network policy controller are disabled, and the Calico manifest is mounted into the server.

So the cluster creation command is (run from the root of the k3d repository):

k3d cluster create \"${clustername}\" \\\n  --k3s-arg '--flannel-backend=none@server:*' \\\n  --k3s-arg '--disable-network-policy' \\\n  --volume \"$(pwd)/docs/usage/guides/calico.yaml:/var/lib/rancher/k3s/server/manifests/calico.yaml\"\n

In this example, the Calico manifest is mounted into the K3s auto-deploy manifests directory, so it is applied automatically on cluster startup. You can add other cluster creation options as needed.

The cluster will start without flannel and with Calico as CNI Plugin.

To watch the pod deployment:

watch \"kubectl get pods -n kube-system\"    \n

At the beginning you will see something like this (via kubectl get pods -n kube-system):

NAME                                       READY   STATUS     RESTARTS   AGE\nhelm-install-traefik-pn84f                 0/1     Pending    0          3s\ncalico-node-97rx8                          0/1     Init:0/3   0          3s\nmetrics-server-7566d596c8-hwnqq            0/1     Pending    0          2s\ncalico-kube-controllers-58b656d69f-2z7cn   0/1     Pending    0          2s\nlocal-path-provisioner-6d59f47c7-rmswg     0/1     Pending    0          2s\ncoredns-8655855d6-cxtnr                    0/1     Pending    0          2s\n

And when startup has finished:

NAME                                       READY   STATUS      RESTARTS   AGE\nmetrics-server-7566d596c8-hwnqq            1/1     Running     0          56s\ncalico-node-97rx8                          1/1     Running     0          57s\nhelm-install-traefik-pn84f                 0/1     Completed   1          57s\nsvclb-traefik-lmjr5                        2/2     Running     0          28s\ncalico-kube-controllers-58b656d69f-2z7cn   1/1     Running     0          56s\nlocal-path-provisioner-6d59f47c7-rmswg     1/1     Running     0          56s\ntraefik-758cd5fc85-x8p57                   1/1     Running     0          28s\ncoredns-8655855d6-cxtnr                    1/1     Running     0          56s\n


"},{"location":"usage/advanced/calico/#references","title":"References","text":""},{"location":"usage/advanced/cuda/","title":"Running CUDA workloads","text":"

If you want to run CUDA workloads on the K3s container, you need to customize it. CUDA workloads require the NVIDIA Container Runtime, so containerd needs to be configured to use this runtime. The K3s container itself also needs to run with this runtime. If you are using Docker, you can install the NVIDIA Container Toolkit.

"},{"location":"usage/advanced/cuda/#building-a-customized-k3s-image","title":"Building a customized K3s image","text":"

To get the NVIDIA container runtime into the K3s image you need to build your own K3s image. The native K3s image is based on Alpine, but the NVIDIA container runtime is not supported on Alpine yet. To work around this, we need to build the image with a supported base image.

"},{"location":"usage/advanced/cuda/#dockerfile","title":"Dockerfile","text":"

Dockerfile:

ARG K3S_TAG=\"v1.28.8-k3s1\"\nARG CUDA_TAG=\"12.4.1-base-ubuntu22.04\"\n\nFROM rancher/k3s:$K3S_TAG as k3s\nFROM nvcr.io/nvidia/cuda:$CUDA_TAG\n\n# Install the NVIDIA container toolkit\nRUN apt-get update && apt-get install -y curl \\\n    && curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \\\n    && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \\\n      sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \\\n      tee /etc/apt/sources.list.d/nvidia-container-toolkit.list \\\n    && apt-get update && apt-get install -y nvidia-container-toolkit \\\n    && nvidia-ctk runtime configure --runtime=containerd\n\nCOPY --from=k3s / / --exclude=/bin\nCOPY --from=k3s /bin /bin\n\n# Deploy the nvidia driver plugin on startup\nCOPY device-plugin-daemonset.yaml /var/lib/rancher/k3s/server/manifests/nvidia-device-plugin-daemonset.yaml\n\nVOLUME /var/lib/kubelet\nVOLUME /var/lib/rancher/k3s\nVOLUME /var/lib/cni\nVOLUME /var/log\n\nENV PATH=\"$PATH:/bin/aux\"\n\nENTRYPOINT [\"/bin/k3s\"]\nCMD [\"agent\"]\n

This Dockerfile is based on the K3s Dockerfile. The following changes are applied:

  1. Change the base images to nvidia/cuda:12.4.1-base-ubuntu22.04 so the NVIDIA Container Toolkit can be installed. The CUDA version (cuda:xx.x.x) must match the one you\u2019re planning to use.
  2. Add a manifest for the NVIDIA driver plugin for Kubernetes with an added RuntimeClass definition. See k3s documentation.
"},{"location":"usage/advanced/cuda/#the-nvidia-device-plugin","title":"The NVIDIA device plugin","text":"

To enable NVIDIA GPU support on Kubernetes you also need to install the NVIDIA device plugin. The device plugin is a DaemonSet that exposes the GPUs on each node to the cluster and makes it possible to run GPU-enabled containers. The full manifest (including the added RuntimeClass) is shown below:

apiVersion: node.k8s.io/v1\nkind: RuntimeClass\nmetadata:\n  name: nvidia\nhandler: nvidia\n---\napiVersion: apps/v1\nkind: DaemonSet\nmetadata:\n  name: nvidia-device-plugin-daemonset\n  namespace: kube-system\nspec:\n  selector:\n    matchLabels:\n      name: nvidia-device-plugin-ds\n  updateStrategy:\n    type: RollingUpdate\n  template:\n    metadata:\n      labels:\n        name: nvidia-device-plugin-ds\n    spec:\n      runtimeClassName: nvidia # Explicitly request the runtime\n      tolerations:\n      - key: nvidia.com/gpu\n        operator: Exists\n        effect: NoSchedule\n      # Mark this pod as a critical add-on; when enabled, the critical add-on\n      # scheduler reserves resources for critical add-on pods so that they can\n      # be rescheduled after a failure.\n      # See https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/\n      priorityClassName: \"system-node-critical\"\n      containers:\n      - image: nvcr.io/nvidia/k8s-device-plugin:v0.15.0-rc.2\n        name: nvidia-device-plugin-ctr\n        env:\n          - name: FAIL_ON_INIT_ERROR\n            value: \"false\"\n        securityContext:\n          allowPrivilegeEscalation: false\n          capabilities:\n            drop: [\"ALL\"]\n        volumeMounts:\n        - name: device-plugin\n          mountPath: /var/lib/kubelet/device-plugins\n      volumes:\n      - name: device-plugin\n        hostPath:\n          path: /var/lib/kubelet/device-plugins\n

Two modifications have been made to the original NVIDIA daemonset:

  1. Added RuntimeClass definition to the YAML frontmatter.

    apiVersion: node.k8s.io/v1\nkind: RuntimeClass\nmetadata:\n  name: nvidia\nhandler: nvidia\n
  2. Added runtimeClassName: nvidia to the Pod spec.

Note: you must explicitly add runtimeClassName: nvidia to all your Pod specs to use the GPU. See k3s documentation.
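For illustration, a minimal Pod spec using the runtime class might look like this (a sketch; the pod and container names are hypothetical, and nvidia.com/gpu resources are only available once the device plugin is running):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test            # hypothetical name
spec:
  runtimeClassName: nvidia  # explicitly request the NVIDIA runtime
  restartPolicy: Never
  containers:
  - name: gpu-test
    image: nvcr.io/nvidia/cuda:12.4.1-base-ubuntu22.04
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1
```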

"},{"location":"usage/advanced/cuda/#build-the-k3s-image","title":"Build the K3s image","text":"

To build the custom image we need to build K3s because we need the generated output.

Put the following files in a directory:

The build.sh script is configured via environment variables and defaults to v1.28.8+k3s1. Please set at least the IMAGE_REGISTRY variable! The script builds and pushes the custom K3s image including the NVIDIA drivers.

build.sh:

#!/bin/bash\n\nset -euxo pipefail\n\nK3S_TAG=${K3S_TAG:=\"v1.28.8-k3s1\"} # replace + with -, if needed\nCUDA_TAG=${CUDA_TAG:=\"12.4.1-base-ubuntu22.04\"}\nIMAGE_REGISTRY=${IMAGE_REGISTRY:=\"MY_REGISTRY\"}\nIMAGE_REPOSITORY=${IMAGE_REPOSITORY:=\"rancher/k3s\"}\nIMAGE_TAG=\"$K3S_TAG-cuda-$CUDA_TAG\"\nIMAGE=${IMAGE:=\"$IMAGE_REGISTRY/$IMAGE_REPOSITORY:$IMAGE_TAG\"}\n\necho \"IMAGE=$IMAGE\"\n\ndocker build \\\n  --build-arg K3S_TAG=$K3S_TAG \\\n  --build-arg CUDA_TAG=$CUDA_TAG \\\n  -t $IMAGE .\ndocker push $IMAGE\necho \"Done!\"\n
"},{"location":"usage/advanced/cuda/#run-and-test-the-custom-image-with-k3d","title":"Run and test the custom image with k3d","text":"

You can use the image with k3d:

k3d cluster create gputest --image=$IMAGE --gpus=1\n

Deploy a test pod:

kubectl apply -f cuda-vector-add.yaml\nkubectl logs cuda-vector-add\n

This should output something like the following:

$ kubectl logs cuda-vector-add\n\n[Vector addition of 50000 elements]\nCopy input data from the host memory to the CUDA device\nCUDA kernel launch with 196 blocks of 256 threads\nCopy output data from the CUDA device to the host memory\nTest PASSED\nDone\n

If the cuda-vector-add pod is stuck in Pending state, probably the device-driver daemonset didn\u2019t get deployed correctly from the auto-deploy manifests. In that case, you can apply it manually via kubectl apply -f device-plugin-daemonset.yaml.

"},{"location":"usage/advanced/cuda/#acknowledgements","title":"Acknowledgements","text":"

Most of the information in this article was obtained from various sources:

"},{"location":"usage/advanced/cuda/#authors","title":"Authors","text":""},{"location":"usage/advanced/podman/","title":"Using Podman instead of Docker","text":"

Podman has a Docker API compatibility layer. k3d uses the Docker API and is compatible with Podman v4 and higher.

Podman support is experimental

k3d is not guaranteed to work with Podman. If you find a bug, please help by filing an issue

Tested with podman version:

Client:       Podman Engine\nVersion:      4.3.1\nAPI Version:  4.3.1\n

"},{"location":"usage/advanced/podman/#using-podman","title":"Using Podman","text":"

Ensure the Podman system socket is available:

sudo systemctl enable --now podman.socket\n# or to start the socket daemonless\n# sudo podman system service --time=0 &\n

Disable timeout for podman service: See the podman-system-service (1) man page for more information.

mkdir -p /etc/containers/containers.conf.d\necho 'service_timeout=0' > /etc/containers/containers.conf.d/timeout.conf\n

To point k3d at the right Docker socket, create a symbolic link:

sudo ln -s /run/podman/podman.sock /var/run/docker.sock\n# or install your system podman-docker if available\nsudo k3d cluster create\n

Alternatively, set DOCKER_HOST when running k3d:

export DOCKER_HOST=unix:///run/podman/podman.sock\nexport DOCKER_SOCK=/run/podman/podman.sock\nsudo --preserve-env=DOCKER_HOST --preserve-env=DOCKER_SOCK k3d cluster create\n
"},{"location":"usage/advanced/podman/#using-rootless-podman","title":"Using rootless Podman","text":"

Ensure the Podman user socket is available:

systemctl --user enable --now podman.socket\n# or podman system service --time=0 &\n

Set DOCKER_HOST when running k3d:

XDG_RUNTIME_DIR=${XDG_RUNTIME_DIR:-/run/user/$(id -u)}\nexport DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock\nexport DOCKER_SOCK=$XDG_RUNTIME_DIR/podman/podman.sock\nk3d cluster create\n
"},{"location":"usage/advanced/podman/#using-cgroup-v2","title":"Using cgroup (v2)","text":"

By default, a non-root user can only get the memory controller and the pids controller delegated.

To run properly, we need to enable CPU, CPUSET, and I/O delegation.

Make sure you\u2019re running cgroup v2

If /sys/fs/cgroup/cgroup.controllers is present on your system, you are using v2, otherwise you are using v1.
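A small sketch of that check as a shell snippet:

```shell
# Print which cgroup version this system is running
# (cgroup v2 exposes cgroup.controllers at the unified hierarchy root).
if [ -f /sys/fs/cgroup/cgroup.controllers ]; then
  echo "cgroup v2"
else
  echo "cgroup v1"
fi
```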

mkdir -p /etc/systemd/system/user@.service.d\ncat > /etc/systemd/system/user@.service.d/delegate.conf <<EOF\n[Service]\nDelegate=cpu cpuset io memory pids\nEOF\nsystemctl daemon-reload\n

Reference: https://rootlesscontaine.rs/getting-started/common/cgroup2/#enabling-cpu-cpuset-and-io-delegation

"},{"location":"usage/advanced/podman/#using-remote-podman","title":"Using remote Podman","text":"

Start Podman on the remote host, and then set DOCKER_HOST when running k3d:

export DOCKER_HOST=ssh://username@hostname\nexport DOCKER_SOCK=/run/user/1000/podman/podman.sock\nk3d cluster create\n
"},{"location":"usage/advanced/podman/#macos","title":"macOS","text":"

Initialize a podman machine if not done already

podman machine init\n

Or start an already existing podman machine

podman machine start\n

Grab connection details

podman system connection ls\nName                         URI                                                         Identity                                      Default\npodman-machine-default       ssh://core@localhost:53685/run/user/501/podman/podman.sock  /Users/myusername/.ssh/podman-machine-default  true\npodman-machine-default-root  ssh://root@localhost:53685/run/podman/podman.sock           /Users/myusername/.ssh/podman-machine-default  false\n

Edit your OpenSSH config file to specify the IdentityFile

vim ~/.ssh/config\n\nHost localhost\n    IdentityFile /Users/myusername/.ssh/podman-machine-default\n
"},{"location":"usage/advanced/podman/#rootless-mode","title":"Rootless mode","text":"

Delegate the cpuset cgroup controller to the user\u2019s systemd slice, export the docker environment variables referenced above for the non-root connection, and create the cluster:

podman machine ssh bash -e <<EOF\n  printf '[Service]\\nDelegate=cpuset\\n' | sudo tee /etc/systemd/system/user@.service.d/k3d.conf\n  sudo systemctl daemon-reload\n  sudo systemctl restart \"user@\\${UID}\"\nEOF\n\nexport DOCKER_HOST=ssh://core@localhost:53685\nexport DOCKER_SOCKET=/run/user/501/podman/podman.sock\nk3d cluster create --k3s-arg '--kubelet-arg=feature-gates=KubeletInUserNamespace=true@server:*'\n
"},{"location":"usage/advanced/podman/#rootful-mode","title":"Rootful mode","text":"

Export the docker environment variables referenced above for the root connection and create the cluster:

export DOCKER_HOST=ssh://root@localhost:53685\nexport DOCKER_SOCK=/run/podman/podman.sock\nk3d cluster create\n
"},{"location":"usage/advanced/podman/#podman-network","title":"Podman network","text":"

The default Podman network has DNS disabled. To allow k3d cluster nodes to communicate via DNS, a new network must be created:

podman network create k3d\npodman network inspect k3d -f '{{ .DNSEnabled }}'\ntrue\n

"},{"location":"usage/advanced/podman/#creating-local-registries","title":"Creating local registries","text":"

Because Podman does not have a default \u201cbridge\u201d network, you have to specify a network using the --default-network flag when creating a local registry:

k3d registry create --default-network podman mycluster-registry\n

To use this registry with a cluster, pass the --registry-use flag:

k3d cluster create --registry-use mycluster-registry mycluster\n

Incompatibility with --registry-create

Because --registry-create assumes the default network to be \u201cbridge\u201d, avoid --registry-create when using Podman. Instead, always create a registry before creating a cluster.

Missing cpuset cgroup controller

If you experience an error regarding a missing cpuset cgroup controller, ensure the user unit xdg-document-portal.service is disabled by running systemctl --user stop xdg-document-portal.service. See this issue

"},{"location":"usage/commands/k3d/","title":"K3d","text":""},{"location":"usage/commands/k3d/#k3d","title":"k3d","text":"

https://k3d.io/ -> Run k3s in Docker!

"},{"location":"usage/commands/k3d/#synopsis","title":"Synopsis","text":"

https://k3d.io/ k3d is a wrapper CLI that helps you to easily create k3s clusters inside docker. Nodes of a k3d cluster are docker containers running a k3s image. All Nodes of a k3d cluster are part of the same docker network.

k3d [flags]\n
"},{"location":"usage/commands/k3d/#options","title":"Options","text":"
  -h, --help         help for k3d\n      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n      --version      Show k3d and default k3s version\n
"},{"location":"usage/commands/k3d/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_cluster/","title":"K3d cluster","text":""},{"location":"usage/commands/k3d_cluster/#k3d-cluster","title":"k3d cluster","text":"

Manage cluster(s)

"},{"location":"usage/commands/k3d_cluster/#synopsis","title":"Synopsis","text":"

Manage cluster(s)

k3d cluster [flags]\n
"},{"location":"usage/commands/k3d_cluster/#options","title":"Options","text":"
  -h, --help   help for cluster\n
"},{"location":"usage/commands/k3d_cluster/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_cluster/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_cluster_create/","title":"K3d cluster create","text":""},{"location":"usage/commands/k3d_cluster_create/#k3d-cluster-create","title":"k3d cluster create","text":"

Create a new cluster

"},{"location":"usage/commands/k3d_cluster_create/#synopsis","title":"Synopsis","text":"

Create a new k3s cluster with containerized nodes (k3s in docker). Every cluster will consist of one or more containers:

k3d cluster create NAME [flags]\n
"},{"location":"usage/commands/k3d_cluster_create/#options","title":"Options","text":"
  -a, --agents int                                                     Specify how many agents you want to create\n      --agents-memory string                                           Memory limit imposed on the agents nodes [From docker]\n      --api-port [HOST:]HOSTPORT                                       Specify the Kubernetes API server port exposed on the LoadBalancer (Format: [HOST:]HOSTPORT)\n                                                                        - Example: `k3d cluster create --servers 3 --api-port 0.0.0.0:6550`\n  -c, --config string                                                  Path of a config file to use\n  -e, --env KEY[=VALUE][@NODEFILTER[;NODEFILTER...]]                   Add environment variables to nodes (Format: KEY[=VALUE][@NODEFILTER[;NODEFILTER...]]\n                                                                        - Example: `k3d cluster create --agents 2 -e \"HTTP_PROXY=my.proxy.com@server:0\" -e \"SOME_KEY=SOME_VAL@server:0\"`\n      --gpus string                                                    GPU devices to add to the cluster node containers ('all' to pass all GPUs) [From docker]\n  -h, --help                                                           help for create\n      --host-alias ip:host[,host,...]                                  Add ip:host[,host,...] 
mappings\n      --host-pid-mode                                                  Enable host pid mode of server(s) and agent(s)\n  -i, --image string                                                   Specify k3s image that you want to use for the nodes\n      --k3s-arg ARG@NODEFILTER[;@NODEFILTER]                           Additional args passed to k3s command (Format: ARG@NODEFILTER[;@NODEFILTER])\n                                                                        - Example: `k3d cluster create --k3s-arg \"--disable=traefik@server:0\"`\n      --k3s-node-label KEY[=VALUE][@NODEFILTER[;NODEFILTER...]]        Add label to k3s node (Format: KEY[=VALUE][@NODEFILTER[;NODEFILTER...]]\n                                                                        - Example: `k3d cluster create --agents 2 --k3s-node-label \"my.label@agent:0,1\" --k3s-node-label \"other.label=somevalue@server:0\"`\n      --kubeconfig-switch-context                                      Directly switch the default kubeconfig's current-context to the new cluster's context (requires --kubeconfig-update-default) (default true)\n      --kubeconfig-update-default                                      Directly update the default kubeconfig with the new cluster's context (default true)\n      --lb-config-override strings                                     Use dotted YAML path syntax to override nginx loadbalancer settings\n      --network string                                                 Join an existing network\n      --no-image-volume                                                Disable the creation of a volume for importing images\n      --no-lb                                                          Disable the creation of a LoadBalancer in front of the server nodes\n      --no-rollback                                                    Disable the automatic rollback actions, if anything goes wrong\n  -p, --port [HOST:][HOSTPORT:]CONTAINERPORT[/PROTOCOL][@NODEFILTER]   Map ports from the 
node containers (via the serverlb) to the host (Format: [HOST:][HOSTPORT:]CONTAINERPORT[/PROTOCOL][@NODEFILTER])\n                                                                        - Example: `k3d cluster create --agents 2 -p 8080:80@agent:0 -p 8081@agent:1`\n      --registry-config string                                         Specify path to an extra registries.yaml file\n      --registry-create NAME[:HOST][:HOSTPORT]                         Create a k3d-managed registry and connect it to the cluster (Format: NAME[:HOST][:HOSTPORT]\n                                                                        - Example: `k3d cluster create --registry-create mycluster-registry:0.0.0.0:5432`\n      --registry-use stringArray                                       Connect to one or more k3d-managed registries running locally\n      --runtime-label KEY[=VALUE][@NODEFILTER[;NODEFILTER...]]         Add label to container runtime (Format: KEY[=VALUE][@NODEFILTER[;NODEFILTER...]]\n                                                                        - Example: `k3d cluster create --agents 2 --runtime-label \"my.label@agent:0,1\" --runtime-label \"other.label=somevalue@server:0\"`\n      --runtime-ulimit NAME[=SOFT]:[HARD]                              Add ulimit to container runtime (Format: NAME[=SOFT]:[HARD]\n                                                                        - Example: `k3d cluster create --agents 2 --runtime-ulimit \"nofile=1024:1024\" --runtime-ulimit \"noproc=1024:1024\"`\n  -s, --servers int                                                    Specify how many servers you want to create\n      --servers-memory string                                          Memory limit imposed on the server nodes [From docker]\n      --subnet 172.28.0.0/16                                           [Experimental: IPAM] Define a subnet for the newly created container network (Example: 172.28.0.0/16)\n      --timeout duration                                        
       Rollback changes if cluster couldn't be created in specified duration.\n      --token string                                                   Specify a cluster token. By default, we generate one.\n  -v, --volume [SOURCE:]DEST[@NODEFILTER[;NODEFILTER...]]              Mount volumes into the nodes (Format: [SOURCE:]DEST[@NODEFILTER[;NODEFILTER...]]\n                                                                        - Example: `k3d cluster create --agents 2 -v /my/path@agent:0,1 -v /tmp/test:/tmp/other@server:0`\n      --wait                                                           Wait for the server(s) to be ready before returning. Use '--timeout DURATION' to not wait forever. (default true)\n
"},{"location":"usage/commands/k3d_cluster_create/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_cluster_create/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_cluster_delete/","title":"K3d cluster delete","text":""},{"location":"usage/commands/k3d_cluster_delete/#k3d-cluster-delete","title":"k3d cluster delete","text":"

Delete cluster(s).

"},{"location":"usage/commands/k3d_cluster_delete/#synopsis","title":"Synopsis","text":"

Delete cluster(s).

k3d cluster delete [NAME [NAME ...] | --all] [flags]\n
"},{"location":"usage/commands/k3d_cluster_delete/#options","title":"Options","text":"
  -a, --all             Delete all existing clusters\n  -c, --config string   Path of a config file to use\n  -h, --help            help for delete\n
"},{"location":"usage/commands/k3d_cluster_delete/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_cluster_delete/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_cluster_edit/","title":"K3d cluster edit","text":""},{"location":"usage/commands/k3d_cluster_edit/#k3d-cluster-edit","title":"k3d cluster edit","text":"

[EXPERIMENTAL] Edit cluster(s).

"},{"location":"usage/commands/k3d_cluster_edit/#synopsis","title":"Synopsis","text":"

[EXPERIMENTAL] Edit cluster(s).

k3d cluster edit CLUSTER [flags]\n
"},{"location":"usage/commands/k3d_cluster_edit/#options","title":"Options","text":"
  -h, --help                                                               help for edit\n      --port-add [HOST:][HOSTPORT:]CONTAINERPORT[/PROTOCOL][@NODEFILTER]   [EXPERIMENTAL] Map ports from the node containers (via the serverlb) to the host (Format: [HOST:][HOSTPORT:]CONTAINERPORT[/PROTOCOL][@NODEFILTER])\n                                                                            - Example: `k3d node edit k3d-mycluster-serverlb --port-add 8080:80`\n
"},{"location":"usage/commands/k3d_cluster_edit/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_cluster_edit/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_cluster_list/","title":"K3d cluster list","text":""},{"location":"usage/commands/k3d_cluster_list/#k3d-cluster-list","title":"k3d cluster list","text":"

List cluster(s)

"},{"location":"usage/commands/k3d_cluster_list/#synopsis","title":"Synopsis","text":"

List cluster(s).

k3d cluster list [NAME [NAME...]] [flags]\n
"},{"location":"usage/commands/k3d_cluster_list/#options","title":"Options","text":"
  -h, --help            help for list\n      --no-headers      Disable headers\n  -o, --output string   Output format. One of: json|yaml\n      --token           Print k3s cluster token\n
"},{"location":"usage/commands/k3d_cluster_list/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_cluster_list/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_cluster_start/","title":"K3d cluster start","text":""},{"location":"usage/commands/k3d_cluster_start/#k3d-cluster-start","title":"k3d cluster start","text":"

Start existing k3d cluster(s)

"},{"location":"usage/commands/k3d_cluster_start/#synopsis","title":"Synopsis","text":"

Start existing k3d cluster(s)

k3d cluster start [NAME [NAME...] | --all] [flags]\n
"},{"location":"usage/commands/k3d_cluster_start/#options","title":"Options","text":"
  -a, --all                Start all existing clusters\n  -h, --help               help for start\n      --timeout duration   Maximum waiting time for '--wait' before canceling/returning.\n      --wait               Wait for the server(s) (and loadbalancer) to be ready before returning. (default true)\n
"},{"location":"usage/commands/k3d_cluster_start/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_cluster_start/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_cluster_stop/","title":"K3d cluster stop","text":""},{"location":"usage/commands/k3d_cluster_stop/#k3d-cluster-stop","title":"k3d cluster stop","text":"

Stop existing k3d cluster(s)

"},{"location":"usage/commands/k3d_cluster_stop/#synopsis","title":"Synopsis","text":"

Stop existing k3d cluster(s).

k3d cluster stop [NAME [NAME...] | --all] [flags]\n
"},{"location":"usage/commands/k3d_cluster_stop/#options","title":"Options","text":"
  -a, --all    Stop all existing clusters\n  -h, --help   help for stop\n
"},{"location":"usage/commands/k3d_cluster_stop/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_cluster_stop/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_completion/","title":"K3d completion","text":""},{"location":"usage/commands/k3d_completion/#k3d-completion","title":"k3d completion","text":"

Generate completion scripts for [bash, zsh, fish, powershell | psh]

"},{"location":"usage/commands/k3d_completion/#synopsis","title":"Synopsis","text":"

To load completions:

Bash:

$ source <(k3d completion bash)\n\n# To load completions for each session, execute once:\n# Linux:\n$ k3d completion bash > /etc/bash_completion.d/k3d\n# macOS:\n$ k3d completion bash > /usr/local/etc/bash_completion.d/k3d\n

Zsh:

# If shell completion is not already enabled in your environment,\n# you will need to enable it.  You can execute the following once:\n\n$ echo \"autoload -U compinit; compinit\" >> ~/.zshrc\n\n# To load completions for each session, execute once:\n$ k3d completion zsh > \"${fpath[1]}/_k3d\"\n\n# You will need to start a new shell for this setup to take effect.\n

fish:

$ k3d completion fish | source\n\n# To load completions for each session, execute once:\n$ k3d completion fish > ~/.config/fish/completions/k3d.fish\n

PowerShell:

PS> k3d completion powershell | Out-String | Invoke-Expression\n\n# To load completions for every new session, run:\nPS> k3d completion powershell > k3d.ps1\n# and source this file from your PowerShell profile.\n
k3d completion SHELL\n
"},{"location":"usage/commands/k3d_completion/#options","title":"Options","text":"
  -h, --help   help for completion\n
"},{"location":"usage/commands/k3d_completion/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_completion/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_config/","title":"K3d config","text":""},{"location":"usage/commands/k3d_config/#k3d-config","title":"k3d config","text":"

Work with config file(s)

"},{"location":"usage/commands/k3d_config/#synopsis","title":"Synopsis","text":"

Work with config file(s)

k3d config [flags]\n
"},{"location":"usage/commands/k3d_config/#options","title":"Options","text":"
  -h, --help   help for config\n
"},{"location":"usage/commands/k3d_config/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_config/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_config_init/","title":"K3d config init","text":""},{"location":"usage/commands/k3d_config_init/#k3d-config-init","title":"k3d config init","text":"
k3d config init [flags]\n
"},{"location":"usage/commands/k3d_config_init/#options","title":"Options","text":"
  -f, --force           Force overwrite of target file\n  -h, --help            help for init\n  -o, --output string   Write a default k3d config (default \"k3d-default.yaml\")\n
"},{"location":"usage/commands/k3d_config_init/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_config_init/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_config_migrate/","title":"K3d config migrate","text":""},{"location":"usage/commands/k3d_config_migrate/#k3d-config-migrate","title":"k3d config migrate","text":"
k3d config migrate INPUT [OUTPUT] [flags]\n
"},{"location":"usage/commands/k3d_config_migrate/#options","title":"Options","text":"
  -h, --help   help for migrate\n
"},{"location":"usage/commands/k3d_config_migrate/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_config_migrate/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_image/","title":"K3d image","text":""},{"location":"usage/commands/k3d_image/#k3d-image","title":"k3d image","text":"

Handle container images.

"},{"location":"usage/commands/k3d_image/#synopsis","title":"Synopsis","text":"

Handle container images.

k3d image [flags]\n
"},{"location":"usage/commands/k3d_image/#options","title":"Options","text":"
  -h, --help   help for image\n
"},{"location":"usage/commands/k3d_image/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_image/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_image_import/","title":"K3d image import","text":""},{"location":"usage/commands/k3d_image_import/#k3d-image-import","title":"k3d image import","text":"

Import image(s) from docker into k3d cluster(s).

"},{"location":"usage/commands/k3d_image_import/#synopsis","title":"Synopsis","text":"

Import image(s) from docker into k3d cluster(s).

If an IMAGE starts with the prefix \u2018docker.io/\u2019, then this prefix is stripped internally. That is, \u2018docker.io/k3d-io/k3d-tools:latest\u2019 is treated as \u2018k3d-io/k3d-tools:latest\u2019.

If an IMAGE starts with the prefix \u2018library/\u2019 (or \u2018docker.io/library/\u2019), then this prefix is stripped internally. That is, \u2018library/busybox:latest\u2019 (or \u2018docker.io/library/busybox:latest\u2019) are treated as \u2018busybox:latest\u2019.

If an IMAGE does not have a version tag, then \u2018:latest\u2019 is assumed. That is, \u2018k3d-io/k3d-tools\u2019 is treated as \u2018k3d-io/k3d-tools:latest\u2019.

A file ARCHIVE always takes precedence. So if a file \u2018./k3d-io/k3d-tools\u2019 exists, k3d will try to import it instead of the IMAGE of the same name.

k3d image import [IMAGE | ARCHIVE [IMAGE | ARCHIVE...]] [flags]\n
"},{"location":"usage/commands/k3d_image_import/#options","title":"Options","text":"
  -c, --cluster stringArray   Select clusters to load the image to. (default [k3s-default])\n  -h, --help                  help for import\n  -k, --keep-tarball          Do not delete the tarball containing the saved images from the shared volume\n  -t, --keep-tools            Do not delete the tools node after import\n  -m, --mode string           Which method to use to import images into the cluster [auto, direct, tools]. See https://k3d.io/stable/usage/importing_images/ (default \"tools-node\")\n
"},{"location":"usage/commands/k3d_image_import/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_image_import/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_kubeconfig/","title":"K3d kubeconfig","text":""},{"location":"usage/commands/k3d_kubeconfig/#k3d-kubeconfig","title":"k3d kubeconfig","text":"

Manage kubeconfig(s)

"},{"location":"usage/commands/k3d_kubeconfig/#synopsis","title":"Synopsis","text":"

Manage kubeconfig(s)

k3d kubeconfig [flags]\n
"},{"location":"usage/commands/k3d_kubeconfig/#options","title":"Options","text":"
  -h, --help   help for kubeconfig\n
"},{"location":"usage/commands/k3d_kubeconfig/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_kubeconfig/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_kubeconfig_get/","title":"K3d kubeconfig get","text":""},{"location":"usage/commands/k3d_kubeconfig_get/#k3d-kubeconfig-get","title":"k3d kubeconfig get","text":"

Print kubeconfig(s) from cluster(s).

"},{"location":"usage/commands/k3d_kubeconfig_get/#synopsis","title":"Synopsis","text":"

Print kubeconfig(s) from cluster(s).

k3d kubeconfig get [CLUSTER [CLUSTER [...]] | --all] [flags]\n
"},{"location":"usage/commands/k3d_kubeconfig_get/#options","title":"Options","text":"
  -a, --all    Output kubeconfigs from all existing clusters\n  -h, --help   help for get\n
"},{"location":"usage/commands/k3d_kubeconfig_get/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_kubeconfig_get/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_kubeconfig_merge/","title":"K3d kubeconfig merge","text":""},{"location":"usage/commands/k3d_kubeconfig_merge/#k3d-kubeconfig-merge","title":"k3d kubeconfig merge","text":"

Write/Merge kubeconfig(s) from cluster(s) into new or existing kubeconfig/file.

"},{"location":"usage/commands/k3d_kubeconfig_merge/#synopsis","title":"Synopsis","text":"

Write/Merge kubeconfig(s) from cluster(s) into new or existing kubeconfig/file.

k3d kubeconfig merge [CLUSTER [CLUSTER [...]] | --all] [flags]\n
"},{"location":"usage/commands/k3d_kubeconfig_merge/#options","title":"Options","text":"
  -a, --all                         Get kubeconfigs from all existing clusters\n  -h, --help                        help for merge\n  -d, --kubeconfig-merge-default    Merge into the default kubeconfig ($KUBECONFIG or /home/thklein/.kube/config)\n  -s, --kubeconfig-switch-context   Switch to new context (default true)\n  -o, --output string               Define output [ - | FILE ] (default from $KUBECONFIG or /home/thklein/.kube/config)\n      --overwrite                   [Careful!] Overwrite existing file, ignoring its contents\n  -u, --update                      Update conflicting fields in existing kubeconfig (default true)\n
"},{"location":"usage/commands/k3d_kubeconfig_merge/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_kubeconfig_merge/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_node/","title":"K3d node","text":""},{"location":"usage/commands/k3d_node/#k3d-node","title":"k3d node","text":"

Manage node(s)

"},{"location":"usage/commands/k3d_node/#synopsis","title":"Synopsis","text":"

Manage node(s)

k3d node [flags]\n
"},{"location":"usage/commands/k3d_node/#options","title":"Options","text":"
  -h, --help   help for node\n
"},{"location":"usage/commands/k3d_node/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_node/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_node_create/","title":"K3d node create","text":""},{"location":"usage/commands/k3d_node_create/#k3d-node-create","title":"k3d node create","text":"

Create a new k3s node in docker

"},{"location":"usage/commands/k3d_node_create/#synopsis","title":"Synopsis","text":"

Create a new containerized k3s node (k3s in docker).

k3d node create NAME [flags]\n
"},{"location":"usage/commands/k3d_node_create/#options","title":"Options","text":"
  -c, --cluster string           Cluster URL or k3d cluster name to connect to. (default \"k3s-default\")\n  -h, --help                     help for create\n  -i, --image string             Specify k3s image used for the node(s) (default: copied from existing node)\n      --k3s-arg stringArray      Additional args passed to the k3s command\n      --k3s-node-label strings   Specify k3s node labels in format \"foo=bar\"\n      --memory string            Memory limit imposed on the node [From docker]\n  -n, --network strings          Add node to (another) runtime network\n      --replicas int             Number of replicas of this node specification. (default 1)\n      --role string              Specify node role [server, agent] (default \"agent\")\n      --runtime-label strings    Specify container runtime labels in format \"foo=bar\"\n      --runtime-ulimit strings   Specify container runtime ulimit in format \"ulimit=soft:hard\"\n      --timeout duration         Maximum waiting time for '--wait' before canceling/returning.\n  -t, --token string             Override cluster token (required when connecting to an external cluster)\n      --wait                     Wait for the node(s) to be ready before returning. (default true)\n
"},{"location":"usage/commands/k3d_node_create/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_node_create/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_node_delete/","title":"K3d node delete","text":""},{"location":"usage/commands/k3d_node_delete/#k3d-node-delete","title":"k3d node delete","text":"

Delete node(s).

"},{"location":"usage/commands/k3d_node_delete/#synopsis","title":"Synopsis","text":"

Delete node(s).

k3d node delete (NAME | --all) [flags]\n
"},{"location":"usage/commands/k3d_node_delete/#options","title":"Options","text":"
  -a, --all          Delete all existing nodes\n  -h, --help         help for delete\n  -r, --registries   Also delete registries\n
"},{"location":"usage/commands/k3d_node_delete/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_node_delete/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_node_edit/","title":"K3d node edit","text":""},{"location":"usage/commands/k3d_node_edit/#k3d-node-edit","title":"k3d node edit","text":"

[EXPERIMENTAL] Edit node(s).

"},{"location":"usage/commands/k3d_node_edit/#synopsis","title":"Synopsis","text":"

[EXPERIMENTAL] Edit node(s).

k3d node edit NODE [flags]\n
"},{"location":"usage/commands/k3d_node_edit/#options","title":"Options","text":"
  -h, --help                                                               help for edit\n      --port-add [HOST:][HOSTPORT:]CONTAINERPORT[/PROTOCOL][@NODEFILTER]   [EXPERIMENTAL] (serverlb only!) Map ports from the node container to the host (Format: [HOST:][HOSTPORT:]CONTAINERPORT[/PROTOCOL][@NODEFILTER])\n                                                                            - Example: `k3d node edit k3d-mycluster-serverlb --port-add 8080:80`\n
"},{"location":"usage/commands/k3d_node_edit/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_node_edit/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_node_list/","title":"K3d node list","text":""},{"location":"usage/commands/k3d_node_list/#k3d-node-list","title":"k3d node list","text":"

List node(s)

"},{"location":"usage/commands/k3d_node_list/#synopsis","title":"Synopsis","text":"

List node(s).

k3d node list [NODE [NODE...]] [flags]\n
"},{"location":"usage/commands/k3d_node_list/#options","title":"Options","text":"
  -h, --help            help for list\n      --no-headers      Disable headers\n  -o, --output string   Output format. One of: json|yaml\n
"},{"location":"usage/commands/k3d_node_list/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_node_list/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_node_start/","title":"K3d node start","text":""},{"location":"usage/commands/k3d_node_start/#k3d-node-start","title":"k3d node start","text":"

Start an existing k3d node

"},{"location":"usage/commands/k3d_node_start/#synopsis","title":"Synopsis","text":"

Start an existing k3d node.

k3d node start NODE [flags]\n
"},{"location":"usage/commands/k3d_node_start/#options","title":"Options","text":"
  -h, --help   help for start\n
"},{"location":"usage/commands/k3d_node_start/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_node_start/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_node_stop/","title":"K3d node stop","text":""},{"location":"usage/commands/k3d_node_stop/#k3d-node-stop","title":"k3d node stop","text":"

Stop an existing k3d node

"},{"location":"usage/commands/k3d_node_stop/#synopsis","title":"Synopsis","text":"

Stop an existing k3d node.

k3d node stop NAME [flags]\n
"},{"location":"usage/commands/k3d_node_stop/#options","title":"Options","text":"
  -h, --help   help for stop\n
"},{"location":"usage/commands/k3d_node_stop/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_node_stop/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_registry/","title":"K3d registry","text":""},{"location":"usage/commands/k3d_registry/#k3d-registry","title":"k3d registry","text":"

Manage registry/registries

"},{"location":"usage/commands/k3d_registry/#synopsis","title":"Synopsis","text":"

Manage registry/registries

k3d registry [flags]\n
"},{"location":"usage/commands/k3d_registry/#options","title":"Options","text":"
  -h, --help   help for registry\n
"},{"location":"usage/commands/k3d_registry/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_registry/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_registry_create/","title":"K3d registry create","text":""},{"location":"usage/commands/k3d_registry_create/#k3d-registry-create","title":"k3d registry create","text":"

Create a new registry

"},{"location":"usage/commands/k3d_registry_create/#synopsis","title":"Synopsis","text":"

Create a new registry.

k3d registry create NAME [flags]\n
"},{"location":"usage/commands/k3d_registry_create/#options","title":"Options","text":"
      --default-network string    Specify the network connected to the registry (default \"bridge\")\n  -h, --help                      help for create\n  -i, --image string              Specify image used for the registry (default \"docker.io/library/registry:2\")\n      --no-help                   Disable the help text (How-To use the registry)\n  -p, --port [HOST:]HOSTPORT      Select which port on your machine (localhost) the registry should listen on (Format: [HOST:]HOSTPORT)\n                                   - Example: `k3d registry create --port 0.0.0.0:5111` (default \"random\")\n      --proxy-password string     Specify the password of the proxied remote registry\n      --proxy-remote-url string   Specify the url of the proxied remote registry\n      --proxy-username string     Specify the username of the proxied remote registry\n  -v, --volume [SOURCE:]DEST      Mount volumes into the registry node (Format: [SOURCE:]DEST)\n
"},{"location":"usage/commands/k3d_registry_create/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_registry_create/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_registry_delete/","title":"K3d registry delete","text":""},{"location":"usage/commands/k3d_registry_delete/#k3d-registry-delete","title":"k3d registry delete","text":"

Delete registry/registries.

"},{"location":"usage/commands/k3d_registry_delete/#synopsis","title":"Synopsis","text":"

Delete registry/registries.

k3d registry delete (NAME | --all) [flags]\n
"},{"location":"usage/commands/k3d_registry_delete/#options","title":"Options","text":"
  -a, --all    Delete all existing registries\n  -h, --help   help for delete\n
"},{"location":"usage/commands/k3d_registry_delete/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_registry_delete/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_registry_list/","title":"K3d registry list","text":""},{"location":"usage/commands/k3d_registry_list/#k3d-registry-list","title":"k3d registry list","text":"

List registries

"},{"location":"usage/commands/k3d_registry_list/#synopsis","title":"Synopsis","text":"

List registries.

k3d registry list [NAME [NAME...]] [flags]\n
"},{"location":"usage/commands/k3d_registry_list/#options","title":"Options","text":"
  -h, --help            help for list\n      --no-headers      Disable headers\n  -o, --output string   Output format. One of: json|yaml\n
"},{"location":"usage/commands/k3d_registry_list/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_registry_list/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_version/","title":"K3d version","text":""},{"location":"usage/commands/k3d_version/#k3d-version","title":"k3d version","text":"

Show k3d and default k3s version

"},{"location":"usage/commands/k3d_version/#synopsis","title":"Synopsis","text":"

Show k3d and default k3s version

k3d version [flags]\n
"},{"location":"usage/commands/k3d_version/#options","title":"Options","text":"
  -h, --help            help for version\n  -o, --output string   Return version information in a different format. Only json is supported\n
"},{"location":"usage/commands/k3d_version/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_version/#see-also","title":"SEE ALSO","text":""},{"location":"usage/commands/k3d_version_list/","title":"K3d version list","text":""},{"location":"usage/commands/k3d_version_list/#k3d-version-list","title":"k3d version list","text":"

List k3d/K3s versions. Component can be one of \u2018k3d\u2019, \u2018k3s\u2019, \u2018k3d-proxy\u2019, \u2018k3d-tools\u2019.

k3d version list COMPONENT [flags]\n
"},{"location":"usage/commands/k3d_version_list/#options","title":"Options","text":"
  -e, --exclude string   Exclude Regexp (default excludes pre-releases and arch-specific tags) (default \".+(rc|engine|alpha|beta|dev|test|arm|arm64|amd64).*\")\n  -f, --format string    [DEPRECATED] Use --output instead (default \"raw\")\n  -h, --help             help for list\n  -i, --include string   Include Regexp (default includes everything) (default \".*\")\n  -l, --limit int        Limit number of tags in output (0 = unlimited)\n  -o, --output string    Output Format [raw | repo] (default \"raw\")\n  -s, --sort string      Sort Mode (asc | desc | off) (default \"desc\")\n
"},{"location":"usage/commands/k3d_version_list/#options-inherited-from-parent-commands","title":"Options inherited from parent commands","text":"
      --timestamps   Enable Log timestamps\n      --trace        Enable super verbose output (trace logging)\n      --verbose      Enable verbose output (debug logging)\n
"},{"location":"usage/commands/k3d_version_list/#see-also","title":"SEE ALSO","text":""}]} \ No newline at end of file diff --git a/v5.6.3/sitemap.xml b/v5.6.3/sitemap.xml index f70a39dd..29cee38c 100644 --- a/v5.6.3/sitemap.xml +++ b/v5.6.3/sitemap.xml @@ -2,242 +2,242 @@ https://k3d.io/v5.6.3/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/design/concepts/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/design/defaults/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/design/networking/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/design/project/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/faq/compatibility/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/faq/faq/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/usage/commands/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/usage/configfile/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/usage/exposing_services/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/usage/importing_images/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/usage/k3s/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/usage/kubeconfig/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/usage/multiserver/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/usage/registries/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/usage/advanced/calico/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/usage/advanced/cuda/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/usage/advanced/podman/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/usage/commands/k3d/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/usage/commands/k3d_cluster/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/usage/commands/k3d_cluster_create/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/usage/commands/k3d_cluster_delete/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/usage/commands/k3d_cluster_edit/ - 2024-04-10 + 2024-04-15 daily 
https://k3d.io/v5.6.3/usage/commands/k3d_cluster_list/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/usage/commands/k3d_cluster_start/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/usage/commands/k3d_cluster_stop/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/usage/commands/k3d_completion/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/usage/commands/k3d_config/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/usage/commands/k3d_config_init/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/usage/commands/k3d_config_migrate/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/usage/commands/k3d_image/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/usage/commands/k3d_image_import/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/usage/commands/k3d_kubeconfig/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/usage/commands/k3d_kubeconfig_get/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/usage/commands/k3d_kubeconfig_merge/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/usage/commands/k3d_node/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/usage/commands/k3d_node_create/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/usage/commands/k3d_node_delete/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/usage/commands/k3d_node_edit/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/usage/commands/k3d_node_list/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/usage/commands/k3d_node_start/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/usage/commands/k3d_node_stop/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/usage/commands/k3d_registry/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/usage/commands/k3d_registry_create/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/usage/commands/k3d_registry_delete/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/usage/commands/k3d_registry_list/ - 2024-04-10 + 2024-04-15 daily 
https://k3d.io/v5.6.3/usage/commands/k3d_version/ - 2024-04-10 + 2024-04-15 daily https://k3d.io/v5.6.3/usage/commands/k3d_version_list/ - 2024-04-10 + 2024-04-15 daily \ No newline at end of file diff --git a/v5.6.3/sitemap.xml.gz b/v5.6.3/sitemap.xml.gz index a90b9eab7bde6722a51d568c17849dc6fa403043..996643005eb9fe9f63f322c89925acfd32e45566 100644 GIT binary patch literal 518 zcmV+h0{Q(PiwFp{w;W~y|8r?{Wo=<_E_iKh0M*&eZrd;n0N{I{BJe#{+O)$qIPPsv zu$>K6q8%>!t0X7Q+fV*MkV6OTG>`^@#IhiK5Gaarn%56&HfM+ly*pHo^{(20Q(mJR z4%M%pKek`1*Y=?qqe+lArL#+i>X>u*F2v<>sS{?fGNjtaK_qo7H%Y9GVpo5w+J{ZK zrU8~;^VJxy+G&(jhB@tZjb5Bz>M!+PG-7&o-&Lk;y>6d&PtV)k^Y-zjbXnW(x~Zdd z1N0HzTP@LC8*$gN7{m?n?;H&Y2Pc$w3PMgqc^Y~-8nQeG`QqcL#1v6ri2gUIOikCz z_Y!$F$c%4jFkdMGrw=jKnB)M{%vtGlqN;N}4m}#6>4T5pQ(3WHf_O#+M5hsL@G(zL zwB??t^0cQ48{c4wM4CQZb~T(L5F$-rgaK2INHy}LKFdho`U|!jfYu&W|9}y|Fii>|I@_IUkP&i2M80& I`}ZFJ093^LbN~PV literal 518 zcmV+h0{Q(PiwFq(juvJD|8r?{Wo=<_E_iKh0M*&eZsQ;j0N{I{BJw@PZMv&&<+!&# z!S30jFm|Tl#|4};Z@>6QQ4U?Hr`3$5C^1Ip6B>rWsb4;=*_1?=hsSO8xV^7(m$mJtn>t9> zLl@wq(Gtz27B?-6L0l2v=V*x7JE6Q&Fr-M7r=gRhAm{)8YA)HVF(P>0G_>d+i z+I&w`dE8Njjc>3-B2AyoyBbar2!SRrK#ws6q#AirpJXmwv{2HZ^3;)|mc)sJ*LhbV zzT@SG^o6Z5V@y;R_E*wY1&|3Cfi)Ol1qO|Y5!dA8&xjk747|L4;K0piw7kyHI~G{F z<=z+279Ed_uxKszfyWjxpO?5e+vS|cfJ^;E^$%>@1FzX_Ed&%8W|O6?QG>z)2>(+1?~D)>%xf2CwE=NZ1UGt%p@ /etc/nsswitch.conf - -RUN chmod 1777 /tmp +FROM rancher/k3s:$K3S_TAG as k3s +FROM nvcr.io/nvidia/cuda:$CUDA_TAG -# Provide custom containerd configuration to configure the nvidia-container-runtime -RUN mkdir -p /var/lib/rancher/k3s/agent/etc/containerd/ +# Install the NVIDIA container toolkit +RUN apt-get update && apt-get install -y curl \ + && curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \ + && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \ + sed 's#deb https://#deb 
[signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \ + tee /etc/apt/sources.list.d/nvidia-container-toolkit.list \ + && apt-get update && apt-get install -y nvidia-container-toolkit \ + && nvidia-ctk runtime configure --runtime=containerd -COPY config.toml.tmpl /var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl +COPY --from=k3s / / --exclude=/bin +COPY --from=k3s /bin /bin # Deploy the nvidia driver plugin on startup -RUN mkdir -p /var/lib/rancher/k3s/server/manifests - COPY device-plugin-daemonset.yaml /var/lib/rancher/k3s/server/manifests/nvidia-device-plugin-daemonset.yaml VOLUME /var/lib/kubelet diff --git a/v5.6.3/usage/advanced/cuda/build.sh b/v5.6.3/usage/advanced/cuda/build.sh index 562601dc..afbc475b 100644 --- a/v5.6.3/usage/advanced/cuda/build.sh +++ b/v5.6.3/usage/advanced/cuda/build.sh @@ -2,20 +2,18 @@ set -euxo pipefail -K3S_TAG=${K3S_TAG:="v1.21.2-k3s1"} # replace + with -, if needed +K3S_TAG=${K3S_TAG:="v1.28.8-k3s1"} # replace + with -, if needed +CUDA_TAG=${CUDA_TAG:="12.4.1-base-ubuntu22.04"} IMAGE_REGISTRY=${IMAGE_REGISTRY:="MY_REGISTRY"} IMAGE_REPOSITORY=${IMAGE_REPOSITORY:="rancher/k3s"} -IMAGE_TAG="$K3S_TAG-cuda" +IMAGE_TAG="$K3S_TAG-cuda-$CUDA_TAG" IMAGE=${IMAGE:="$IMAGE_REGISTRY/$IMAGE_REPOSITORY:$IMAGE_TAG"} -NVIDIA_CONTAINER_RUNTIME_VERSION=${NVIDIA_CONTAINER_RUNTIME_VERSION:="3.5.0-1"} - echo "IMAGE=$IMAGE" -# due to some unknown reason, copying symlinks fails with buildkit enabled -DOCKER_BUILDKIT=0 docker build \ +docker build \ --build-arg K3S_TAG=$K3S_TAG \ - --build-arg NVIDIA_CONTAINER_RUNTIME_VERSION=$NVIDIA_CONTAINER_RUNTIME_VERSION \ + --build-arg CUDA_TAG=$CUDA_TAG \ -t $IMAGE . docker push $IMAGE echo "Done!" 
\ No newline at end of file diff --git a/v5.6.3/usage/advanced/cuda/config.toml.tmpl b/v5.6.3/usage/advanced/cuda/config.toml.tmpl deleted file mode 100644 index 4d5c7fa4..00000000 --- a/v5.6.3/usage/advanced/cuda/config.toml.tmpl +++ /dev/null @@ -1,55 +0,0 @@ -[plugins.opt] - path = "{{ .NodeConfig.Containerd.Opt }}" - -[plugins.cri] - stream_server_address = "127.0.0.1" - stream_server_port = "10010" - -{{- if .IsRunningInUserNS }} - disable_cgroup = true - disable_apparmor = true - restrict_oom_score_adj = true -{{end}} - -{{- if .NodeConfig.AgentConfig.PauseImage }} - sandbox_image = "{{ .NodeConfig.AgentConfig.PauseImage }}" -{{end}} - -{{- if not .NodeConfig.NoFlannel }} -[plugins.cri.cni] - bin_dir = "{{ .NodeConfig.AgentConfig.CNIBinDir }}" - conf_dir = "{{ .NodeConfig.AgentConfig.CNIConfDir }}" -{{end}} - -[plugins.cri.containerd.runtimes.runc] - # ---- changed from 'io.containerd.runc.v2' for GPU support - runtime_type = "io.containerd.runtime.v1.linux" - -# ---- added for GPU support -[plugins.linux] - runtime = "nvidia-container-runtime" - -{{ if .PrivateRegistryConfig }} -{{ if .PrivateRegistryConfig.Mirrors }} -[plugins.cri.registry.mirrors]{{end}} -{{range $k, $v := .PrivateRegistryConfig.Mirrors }} -[plugins.cri.registry.mirrors."{{$k}}"] - endpoint = [{{range $i, $j := $v.Endpoints}}{{if $i}}, {{end}}{{printf "%q" .}}{{end}}] -{{end}} - -{{range $k, $v := .PrivateRegistryConfig.Configs }} -{{ if $v.Auth }} -[plugins.cri.registry.configs."{{$k}}".auth] - {{ if $v.Auth.Username }}username = "{{ $v.Auth.Username }}"{{end}} - {{ if $v.Auth.Password }}password = "{{ $v.Auth.Password }}"{{end}} - {{ if $v.Auth.Auth }}auth = "{{ $v.Auth.Auth }}"{{end}} - {{ if $v.Auth.IdentityToken }}identitytoken = "{{ $v.Auth.IdentityToken }}"{{end}} -{{end}} -{{ if $v.TLS }} -[plugins.cri.registry.configs."{{$k}}".tls] - {{ if $v.TLS.CAFile }}ca_file = "{{ $v.TLS.CAFile }}"{{end}} - {{ if $v.TLS.CertFile }}cert_file = "{{ $v.TLS.CertFile }}"{{end}} - {{ if 
$v.TLS.KeyFile }}key_file = "{{ $v.TLS.KeyFile }}"{{end}} -{{end}} -{{end}} -{{end}} \ No newline at end of file diff --git a/v5.6.3/usage/advanced/cuda/cuda-vector-add.yaml b/v5.6.3/usage/advanced/cuda/cuda-vector-add.yaml index e22849b4..5b7e5b66 100644 --- a/v5.6.3/usage/advanced/cuda/cuda-vector-add.yaml +++ b/v5.6.3/usage/advanced/cuda/cuda-vector-add.yaml @@ -3,6 +3,7 @@ kind: Pod metadata: name: cuda-vector-add spec: + runtimeClassName: nvidia # Explicitly request the runtime restartPolicy: OnFailure containers: - name: cuda-vector-add diff --git a/v5.6.3/usage/advanced/cuda/device-plugin-daemonset.yaml b/v5.6.3/usage/advanced/cuda/device-plugin-daemonset.yaml index 6bb521a3..a52bb06d 100644 --- a/v5.6.3/usage/advanced/cuda/device-plugin-daemonset.yaml +++ b/v5.6.3/usage/advanced/cuda/device-plugin-daemonset.yaml @@ -1,3 +1,9 @@ +apiVersion: node.k8s.io/v1 +kind: RuntimeClass +metadata: + name: nvidia +handler: nvidia +--- apiVersion: apps/v1 kind: DaemonSet metadata: @@ -7,35 +13,37 @@ spec: selector: matchLabels: name: nvidia-device-plugin-ds + updateStrategy: + type: RollingUpdate template: metadata: - # Mark this pod as a critical add-on; when enabled, the critical add-on scheduler - # reserves resources for critical add-on pods so that they can be rescheduled after - # a failure. This annotation works in tandem with the toleration below. - annotations: - scheduler.alpha.kubernetes.io/critical-pod: "" labels: name: nvidia-device-plugin-ds spec: + runtimeClassName: nvidia # Explicitly request the runtime tolerations: - # Allow this pod to be rescheduled while the node is in "critical add-ons only" mode. - # This, along with the annotation above marks this pod as a critical add-on. - - key: CriticalAddonsOnly + - key: nvidia.com/gpu operator: Exists + effect: NoSchedule + # Mark this pod as a critical add-on; when enabled, the critical add-on + # scheduler reserves resources for critical add-on pods so that they can + # be rescheduled after a failure. 
+ # See https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/ + priorityClassName: "system-node-critical" containers: - - env: - - name: DP_DISABLE_HEALTHCHECKS - value: xids - image: nvidia/k8s-device-plugin:1.11 + - image: nvcr.io/nvidia/k8s-device-plugin:v0.15.0-rc.2 name: nvidia-device-plugin-ctr + env: + - name: FAIL_ON_INIT_ERROR + value: "false" securityContext: - allowPrivilegeEscalation: true + allowPrivilegeEscalation: false capabilities: drop: ["ALL"] volumeMounts: - - name: device-plugin - mountPath: /var/lib/kubelet/device-plugins - volumes: - name: device-plugin - hostPath: - path: /var/lib/kubelet/device-plugins \ No newline at end of file + mountPath: /var/lib/kubelet/device-plugins + volumes: + - name: device-plugin + hostPath: + path: /var/lib/kubelet/device-plugins \ No newline at end of file diff --git a/v5.6.3/usage/advanced/cuda/index.html b/v5.6.3/usage/advanced/cuda/index.html index d23f9dce..7b8191e3 100644 --- a/v5.6.3/usage/advanced/cuda/index.html +++ b/v5.6.3/usage/advanced/cuda/index.html @@ -662,13 +662,6 @@ Dockerfile - - -
  • - - Configure containerd - -
  • @@ -695,13 +688,6 @@ Run and test the custom image with k3d -
  • - -
  • - - Known issues - -
  • @@ -1663,13 +1649,6 @@ Dockerfile -
  • - -
  • - - Configure containerd - -
  • @@ -1696,13 +1675,6 @@ Run and test the custom image with k3d -
  • - -
  • - - Known issues - -
  • @@ -1748,42 +1720,25 @@ The native K3s image is based on Alpine but the NVIDIA container runtime is not To get around this we need to build the image with a supported base image.

    Dockerfile

    Dockerfile:

    -
    ARG K3S_TAG="v1.21.2-k3s1"
    -FROM rancher/k3s:$K3S_TAG as k3s
    -
    -FROM nvidia/cuda:11.2.0-base-ubuntu18.04
    -
    -ARG NVIDIA_CONTAINER_RUNTIME_VERSION
    -ENV NVIDIA_CONTAINER_RUNTIME_VERSION=$NVIDIA_CONTAINER_RUNTIME_VERSION
    +
    ARG K3S_TAG="v1.28.8-k3s1"
    +ARG CUDA_TAG="12.4.1-base-ubuntu22.04"
     
    -RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections
    -
    -RUN apt-get update && \
    -    apt-get -y install gnupg2 curl
    -
    -# Install NVIDIA Container Runtime
    -RUN curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | apt-key add -
    -
    -RUN curl -s -L https://nvidia.github.io/nvidia-container-runtime/ubuntu18.04/nvidia-container-runtime.list | tee /etc/apt/sources.list.d/nvidia-container-runtime.list
    -
    -RUN apt-get update && \
    -    apt-get -y install nvidia-container-runtime=${NVIDIA_CONTAINER_RUNTIME_VERSION}
    -
    -COPY --from=k3s / /
    -
    -RUN mkdir -p /etc && \
    -    echo 'hosts: files dns' > /etc/nsswitch.conf
    -
    -RUN chmod 1777 /tmp
    +FROM rancher/k3s:$K3S_TAG as k3s
    +FROM nvcr.io/nvidia/cuda:$CUDA_TAG
     
    -# Provide custom containerd configuration to configure the nvidia-container-runtime
    -RUN mkdir -p /var/lib/rancher/k3s/agent/etc/containerd/
    +# Install the NVIDIA container toolkit
    +RUN apt-get update && apt-get install -y curl \
    +    && curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
    +    && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
    +      sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
    +      tee /etc/apt/sources.list.d/nvidia-container-toolkit.list \
    +    && apt-get update && apt-get install -y nvidia-container-toolkit \
    +    && nvidia-ctk runtime configure --runtime=containerd
     
    -COPY config.toml.tmpl /var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl
    +COPY --from=k3s / / --exclude=/bin
    +COPY --from=k3s /bin /bin
     
     # Deploy the nvidia driver plugin on startup
    -RUN mkdir -p /var/lib/rancher/k3s/server/manifests
    -
     COPY device-plugin-daemonset.yaml /var/lib/rancher/k3s/server/manifests/nvidia-device-plugin-daemonset.yaml
     
     VOLUME /var/lib/kubelet
    @@ -1799,76 +1754,23 @@ To get around this we need to build the image with a supported base image.

This Dockerfile is based on the K3s Dockerfile. The following changes are applied:

-1. Change the base images to nvidia/cuda:11.2.0-base-ubuntu18.04 so the NVIDIA Container Runtime can be installed. The version of cuda:xx.x.x must match the one you’re planning to use.
-2. Add a custom containerd config.toml template to add the NVIDIA Container Runtime. This replaces the default runc runtime.
-3. Add a manifest for the NVIDIA driver plugin for Kubernetes.
+1. Change the base images to nvidia/cuda:12.4.1-base-ubuntu22.04 so the NVIDIA Container Toolkit can be installed. The version of cuda:xx.x.x must match the one you’re planning to use.
+2. Add a manifest for the NVIDIA driver plugin for Kubernetes with an added RuntimeClass definition. See k3s documentation.
    -
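One step in the new Dockerfile that is easy to misread is the sed rewrite of the upstream apt source list. A standalone sketch of what it does to a representative line (the input line is illustrative of the list's format, not copied from the live file):

```shell
# What the sed step in the Dockerfile does: it injects a signed-by option
# (pointing at the dearmored NVIDIA key) into every 'deb' line of the
# upstream apt source list, so apt trusts the repository.
line='deb https://nvidia.github.io/libnvidia-container/stable/deb/$(ARCH) /'
echo "$line" | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g'
```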

    Configure containerd

    -

    We need to configure containerd to use the NVIDIA Container Runtime. We need to customize the config.toml that is used at startup. K3s provides a way to do this using a config.toml.tmpl file. More information can be found on the K3s site.

    -
    [plugins.opt]
    -  path = "{{ .NodeConfig.Containerd.Opt }}"
    -
    -[plugins.cri]
    -  stream_server_address = "127.0.0.1"
    -  stream_server_port = "10010"
    -
    -{{- if .IsRunningInUserNS }}
    -  disable_cgroup = true
    -  disable_apparmor = true
    -  restrict_oom_score_adj = true
    -{{end}}
    -
    -{{- if .NodeConfig.AgentConfig.PauseImage }}
    -  sandbox_image = "{{ .NodeConfig.AgentConfig.PauseImage }}"
    -{{end}}
    -
    -{{- if not .NodeConfig.NoFlannel }}
    -[plugins.cri.cni]
    -  bin_dir = "{{ .NodeConfig.AgentConfig.CNIBinDir }}"
    -  conf_dir = "{{ .NodeConfig.AgentConfig.CNIConfDir }}"
    -{{end}}
    -
    -[plugins.cri.containerd.runtimes.runc]
    -  # ---- changed from 'io.containerd.runc.v2' for GPU support
    -  runtime_type = "io.containerd.runtime.v1.linux"
    -
    -# ---- added for GPU support
    -[plugins.linux]
    -  runtime = "nvidia-container-runtime"
    -
    -{{ if .PrivateRegistryConfig }}
    -{{ if .PrivateRegistryConfig.Mirrors }}
    -[plugins.cri.registry.mirrors]{{end}}
    -{{range $k, $v := .PrivateRegistryConfig.Mirrors }}
    -[plugins.cri.registry.mirrors."{{$k}}"]
    -  endpoint = [{{range $i, $j := $v.Endpoints}}{{if $i}}, {{end}}{{printf "%q" .}}{{end}}]
    -{{end}}
    -
    -{{range $k, $v := .PrivateRegistryConfig.Configs }}
    -{{ if $v.Auth }}
    -[plugins.cri.registry.configs."{{$k}}".auth]
    -  {{ if $v.Auth.Username }}username = "{{ $v.Auth.Username }}"{{end}}
    -  {{ if $v.Auth.Password }}password = "{{ $v.Auth.Password }}"{{end}}
    -  {{ if $v.Auth.Auth }}auth = "{{ $v.Auth.Auth }}"{{end}}
    -  {{ if $v.Auth.IdentityToken }}identitytoken = "{{ $v.Auth.IdentityToken }}"{{end}}
    -{{end}}
    -{{ if $v.TLS }}
    -[plugins.cri.registry.configs."{{$k}}".tls]
    -  {{ if $v.TLS.CAFile }}ca_file = "{{ $v.TLS.CAFile }}"{{end}}
    -  {{ if $v.TLS.CertFile }}cert_file = "{{ $v.TLS.CertFile }}"{{end}}
    -  {{ if $v.TLS.KeyFile }}key_file = "{{ $v.TLS.KeyFile }}"{{end}}
    -{{end}}
    -{{end}}
    -{{end}}
    -

    The NVIDIA device plugin

    -

    To enable NVIDIA GPU support on Kubernetes you also need to install the NVIDIA device plugin. The device plugin is a deamonset and allows you to automatically:

    +

    To enable NVIDIA GPU support on Kubernetes you also need to install the NVIDIA device plugin. The device plugin is a daemonset and allows you to automatically:

• Expose the number of GPUs on each node of your cluster
• Keep track of the health of your GPUs
• Run GPU-enabled containers in your Kubernetes cluster.
    -
    apiVersion: apps/v1
    +
    apiVersion: node.k8s.io/v1
    +kind: RuntimeClass
    +metadata:
    +  name: nvidia
    +handler: nvidia
    +---
    +apiVersion: apps/v1
     kind: DaemonSet
     metadata:
       name: nvidia-device-plugin-daemonset
    @@ -1877,69 +1779,84 @@ The following changes are applied:

    selector: matchLabels: name: nvidia-device-plugin-ds + updateStrategy: + type: RollingUpdate template: metadata: - # Mark this pod as a critical add-on; when enabled, the critical add-on scheduler - # reserves resources for critical add-on pods so that they can be rescheduled after - # a failure. This annotation works in tandem with the toleration below. - annotations: - scheduler.alpha.kubernetes.io/critical-pod: "" labels: name: nvidia-device-plugin-ds spec: + runtimeClassName: nvidia # Explicitly request the runtime tolerations: - # Allow this pod to be rescheduled while the node is in "critical add-ons only" mode. - # This, along with the annotation above marks this pod as a critical add-on. - - key: CriticalAddonsOnly + - key: nvidia.com/gpu operator: Exists + effect: NoSchedule + # Mark this pod as a critical add-on; when enabled, the critical add-on + # scheduler reserves resources for critical add-on pods so that they can + # be rescheduled after a failure. + # See https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/ + priorityClassName: "system-node-critical" containers: - - env: - - name: DP_DISABLE_HEALTHCHECKS - value: xids - image: nvidia/k8s-device-plugin:1.11 + - image: nvcr.io/nvidia/k8s-device-plugin:v0.15.0-rc.2 name: nvidia-device-plugin-ctr + env: + - name: FAIL_ON_INIT_ERROR + value: "false" securityContext: - allowPrivilegeEscalation: true + allowPrivilegeEscalation: false capabilities: drop: ["ALL"] volumeMounts: - - name: device-plugin - mountPath: /var/lib/kubelet/device-plugins - volumes: - name: device-plugin - hostPath: - path: /var/lib/kubelet/device-plugins + mountPath: /var/lib/kubelet/device-plugins + volumes: + - name: device-plugin + hostPath: + path: /var/lib/kubelet/device-plugins +
    +

Two modifications have been made to the original NVIDIA daemonset:

1. Added a RuntimeClass definition to the YAML frontmatter:

   apiVersion: node.k8s.io/v1
   kind: RuntimeClass
   metadata:
     name: nvidia
   handler: nvidia

2. Added runtimeClassName: nvidia to the Pod spec.

    Note: you must explicitly add runtimeClassName: nvidia to all your Pod specs to use the GPU. See k3s documentation.

    Build the K3s image

    To build the custom image we need to build K3s because we need the generated output.

    Put the following files in a directory:

    -

    The build.sh script is configured using exports & defaults to v1.21.2+k3s1. Please set at least the IMAGE_REGISTRY variable! The script performs the following steps builds the custom K3s image including the nvidia drivers.

    +

The build.sh script is configured using environment variables and defaults to v1.28.8+k3s1. Please set at least the IMAGE_REGISTRY variable! The script builds and pushes the custom K3s image, including the NVIDIA drivers.
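The "replace + with -, if needed" comment in the script refers to Docker's tag rules: image tags may not contain a `+`, so the upstream K3s version string has to be converted before use as K3S_TAG. A minimal sketch of that conversion (the example version string is illustrative):

```shell
# Docker image tags may not contain '+', so the upstream K3s release name
# (e.g. v1.28.8+k3s1) must be converted before it can be used as K3S_TAG.
UPSTREAM_VERSION="v1.28.8+k3s1"                      # example upstream release name
K3S_TAG="$(echo "$UPSTREAM_VERSION" | tr '+' '-')"   # replace '+' with '-'
echo "$K3S_TAG"   # v1.28.8-k3s1
```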

    build.sh:

    #!/bin/bash
     
     set -euxo pipefail
     
    -K3S_TAG=${K3S_TAG:="v1.21.2-k3s1"} # replace + with -, if needed
    +K3S_TAG=${K3S_TAG:="v1.28.8-k3s1"} # replace + with -, if needed
    +CUDA_TAG=${CUDA_TAG:="12.4.1-base-ubuntu22.04"}
     IMAGE_REGISTRY=${IMAGE_REGISTRY:="MY_REGISTRY"}
     IMAGE_REPOSITORY=${IMAGE_REPOSITORY:="rancher/k3s"}
    -IMAGE_TAG="$K3S_TAG-cuda"
    +IMAGE_TAG="$K3S_TAG-cuda-$CUDA_TAG"
     IMAGE=${IMAGE:="$IMAGE_REGISTRY/$IMAGE_REPOSITORY:$IMAGE_TAG"}
     
    -NVIDIA_CONTAINER_RUNTIME_VERSION=${NVIDIA_CONTAINER_RUNTIME_VERSION:="3.5.0-1"}
    -
     echo "IMAGE=$IMAGE"
     
    -# due to some unknown reason, copying symlinks fails with buildkit enabled
    -DOCKER_BUILDKIT=0 docker build \
    +docker build \
       --build-arg K3S_TAG=$K3S_TAG \
    -  --build-arg NVIDIA_CONTAINER_RUNTIME_VERSION=$NVIDIA_CONTAINER_RUNTIME_VERSION \
    +  --build-arg CUDA_TAG=$CUDA_TAG \
       -t $IMAGE .
     docker push $IMAGE
     echo "Done!"
    @@ -1963,10 +1880,6 @@ Test PASSED
     Done
     

If the cuda-vector-add pod is stuck in the Pending state, the device-plugin daemonset probably did not get deployed correctly from the auto-deploy manifests. In that case, you can apply it manually via kubectl apply -f device-plugin-daemonset.yaml.

    -

    Known issues

    -
      -
    • This approach does not work on WSL2 yet. The NVIDIA driver plugin and container runtime rely on the NVIDIA Management Library (NVML) which is not yet supported. See the CUDA on WSL User Guide.
    • -

    Acknowledgements

    Most of the information in this article was obtained from various sources:


    @@ -1987,7 +1901,7 @@ Done Last update: - October 27, 2023 + April 15, 2024