From adc0d242c9d2db1f23919da7249193a1e8cd4db3 Mon Sep 17 00:00:00 2001
From: iwilltry42
Date: Fri, 17 Jul 2020 08:03:28 +0000
Subject: [PATCH] commit 9f76db2d4aa9b4115a0d73819f340705d6a2e610

    Author: iwilltry42
    Date:   Fri Jul 17 10:02:32 2020 +0200

    root: same help text for version and --version
---
 index.html                            |   6 +++---
 search/search_index.json              |   2 +-
 sitemap.xml.gz                        | Bin 304 -> 304 bytes
 static/asciicast/20200715_k3d.01.cast |   2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/index.html b/index.html
index 8251c05c..b6cd7dad 100644
--- a/index.html
+++ b/index.html
@@ -711,8 +711,8 @@
   • use the install script to grab a specific release (via TAG environment variable):
-      • wget: wget -q -O - https://raw.githubusercontent.com/rancher/k3d/main/install.sh | TAG=v3.0.0-rc.7 bash
-      • curl: curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | TAG=v3.0.0-rc.7 bash
+      • wget: wget -q -O - https://raw.githubusercontent.com/rancher/k3d/main/install.sh | TAG=v3.0.0 bash
+      • curl: curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | TAG=v3.0.0 bash
@@ -750,7 +750,7 @@
-    Last update: July 15, 2020
+    Last update: July 17, 2020
diff --git a/search/search_index.json b/search/search_index.json
index d131a286..b20b8e89 100644
--- a/search/search_index.json
+++ b/search/search_index.json
@@ -1 +1 @@

Overview

This page is targeting k3d v3.0.0 and newer!

k3d is a lightweight wrapper to run k3s (Rancher Lab's minimal Kubernetes distribution) in docker. k3d makes it very easy to create single- and multi-node k3s clusters in docker, e.g. for local development on Kubernetes.

View a quick demo

Learning

- Rancher Meetup - May 2020 - Simplifying Your Cloud-Native Development Workflow With K3s, K3c and K3d (YouTube)
- k3d demo repository: iwilltry42/k3d-demo

Requirements

- docker

Releases

Platform        | Stage  | Version | Release Date
GitHub Releases | stable |         |
GitHub Releases | latest |         |
Homebrew        | -      | -       |

Installation

You have several options there (a quick verification example follows at the end of this page):

- use the install script to grab the latest release:
  - wget: wget -q -O - https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash
  - curl: curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash
- use the install script to grab a specific release (via TAG environment variable):
  - wget: wget -q -O - https://raw.githubusercontent.com/rancher/k3d/main/install.sh | TAG=v3.0.0-rc.7 bash
  - curl: curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | TAG=v3.0.0-rc.7 bash
- use Homebrew: brew install k3d (Homebrew is available for macOS and Linux)
  - Formula can be found in homebrew/homebrew-core and is mirrored to homebrew/linuxbrew-core
- install via AUR package rancher-k3d-bin: yay -S rancher-k3d-bin
- grab a release from the release tab and install it yourself
- install via go: go install github.com/rancher/k3d (Note: this will give you unreleased/bleeding-edge changes)

Quick Start

Create a cluster named mycluster with just a single server node:

    k3d cluster create mycluster

Get the new cluster's connection details merged into your default kubeconfig (usually specified using the KUBECONFIG environment variable or the default path $HOME/.kube/config) and directly switch to the new context:

    k3d kubeconfig merge mycluster --switch-context

Use the new cluster with kubectl, e.g.:

    kubectl get nodes

Related Projects

- k3x: a graphical interface (for Linux) to k3d.
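As a quick sanity check after installing, the version subcommand (documented in the command tree further down this index) reports both the k3d and the bundled k3s version. The exact output shape below is illustrative, not copied from a release:

```sh
$ k3d version
k3d version v3.0.0
k3s version v1.18.6-k3s1 (default)
```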
View a quick demo","title":"Overview"},{"location":"#learning","text":"Rancher Meetup - May 2020 - Simplifying Your Cloud-Native Development Workflow With K3s, K3c and K3d (YouTube) k3d demo repository: iwilltry42/k3d-demo","title":"Learning"},{"location":"#requirements","text":"docker","title":"Requirements"},{"location":"#releases","text":"Platform Stage Version Release Date GitHub Releases stable GitHub Releases latest Homebrew - -","title":"Releases"},{"location":"#installation","text":"You have several options there: use the install script to grab the latest release: wget: wget -q -O - https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash curl: curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash use the install script to grab a specific release (via TAG environment variable): wget: wget -q -O - https://raw.githubusercontent.com/rancher/k3d/main/install.sh | TAG = v3.0.0-rc.7 bash curl: curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | TAG = v3.0.0-rc.7 bash use Homebrew : brew install k3d (Homebrew is available for MacOS and Linux) Formula can be found in homebrew/homebrew-core and is mirrored to homebrew/linuxbrew-core install via AUR package rancher-k3d-bin : yay -S rancher-k3d-bin grab a release from the release tab and install it yourself. install via go: go install github.com/rancher/k3d ( Note : this will give you unreleased/bleeding-edge changes)","title":"Installation"},{"location":"#quick-start","text":"Create a cluster named mycluster with just a single server node: k3d cluster create mycluster Get the new cluster\u2019s connection details merged into your default kubeconfig (usually specified using the KUBECONFIG environment variable or the default path $HOME /.kube/config ) and directly switch to the new context: k3d kubeconfig merge mycluster --switch-context Use the new cluster with kubectl , e.g.: kubectl get nodes","title":"Quick Start"},{"location":"#related-projects","text":"k3x : a graphics interface (for Linux) to k3d.","title":"Related Projects"},{"location":"faq/faq/","text":"FAQ / Nice to know \u00b6 Issues with BTRFS \u00b6 As @jaredallard pointed out , people running k3d on a system with btrfs , may need to mount /dev/mapper into the nodes for the setup to work. This will do: k3d cluster create CLUSTER_NAME -v /dev/mapper:/dev/mapper Issues with ZFS \u00b6 k3s currently has no support for ZFS and thus, creating multi-server setups (e.g. k3d cluster create multiserver --servers 3 ) fails, because the initializing server node (server flag --cluster-init ) errors out with the following log: starting kubernetes: preparing server: start cluster and https: raft_init () : io: create I/O capabilities probe file: posix_allocate: operation not supported on socket This issue can be worked around by providing docker with a different filesystem (that\u2019s also better for docker-in-docker stuff). A possible solution can be found here: https://github.com/rancher/k3s/issues/1688#issuecomment-619570374 Pods evicted due to lack of disk space \u00b6 Pods go to evicted state after doing X Related issues: #133 - Pods evicted due to NodeHasDiskPressure (collection of #119 and #130) Background: somehow docker runs out of space for the k3d node containers, which triggers a hard eviction in the kubelet Possible fix/workaround by @zer0def : use a docker storage driver which cleans up properly (e.g. 
overlay2) clean up or expand docker root filesystem change the kubelet\u2019s eviction thresholds upon cluster creation: k3d cluster create --k3s-agent-arg '--kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1%' --k3s-agent-arg '--kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1%' Restarting a multi-server cluster or the initializing server node fails \u00b6 What you do: You create a cluster with more than one server node and later, you either stop server-0 or stop/start the whole cluster What fails: After the restart, you cannot connect to the cluster anymore and kubectl will give you a lot of errors What causes this issue: it\u2019s a known issue with dqlite in k3s which doesn\u2019t allow the initializing server node to go down What\u2019s the solution: Hopefully, this will be solved by the planned replacement of dqlite with embedded etcd in k3s Related issues: #262","title":"FAQ / Nice to know"},{"location":"faq/faq/#faq-nice-to-know","text":"","title":"FAQ / Nice to know"},{"location":"faq/faq/#issues-with-btrfs","text":"As @jaredallard pointed out , people running k3d on a system with btrfs , may need to mount /dev/mapper into the nodes for the setup to work. This will do: k3d cluster create CLUSTER_NAME -v /dev/mapper:/dev/mapper","title":"Issues with BTRFS"},{"location":"faq/faq/#issues-with-zfs","text":"k3s currently has no support for ZFS and thus, creating multi-server setups (e.g. k3d cluster create multiserver --servers 3 ) fails, because the initializing server node (server flag --cluster-init ) errors out with the following log: starting kubernetes: preparing server: start cluster and https: raft_init () : io: create I/O capabilities probe file: posix_allocate: operation not supported on socket This issue can be worked around by providing docker with a different filesystem (that\u2019s also better for docker-in-docker stuff). A possible solution can be found here: https://github.com/rancher/k3s/issues/1688#issuecomment-619570374","title":"Issues with ZFS"},{"location":"faq/faq/#pods-evicted-due-to-lack-of-disk-space","text":"Pods go to evicted state after doing X Related issues: #133 - Pods evicted due to NodeHasDiskPressure (collection of #119 and #130) Background: somehow docker runs out of space for the k3d node containers, which triggers a hard eviction in the kubelet Possible fix/workaround by @zer0def : use a docker storage driver which cleans up properly (e.g. 
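Following up on the ZFS workaround referenced above: the linked issue boils down to giving the docker daemon a non-ZFS filesystem to run on. A minimal sketch of that approach, assuming a pool named zpool and that relocating docker's data root to an ext4-formatted zvol is acceptable; pool name, size and paths are illustrative, not taken from the k3d docs:

```sh
# Stop docker before relocating its data root
sudo systemctl stop docker

# Create a 50G zvol and format it with a filesystem k3s supports
sudo zfs create -V 50G zpool/docker
sudo mkfs.ext4 /dev/zvol/zpool/docker

# Mount it over docker's data root and restart the daemon
# (this starts from a fresh docker root; existing images are hidden, not migrated)
sudo mount /dev/zvol/zpool/docker /var/lib/docker
sudo systemctl start docker
```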
Feature Comparison: v1 vs. v3

v1.x feature -> implementation in v3

- k3d
  - check-tools -> won't do
  - shell -> planned: `k3d shell CLUSTER`
    - --name -> planned: drop (now as arg)
    - --command -> planned: keep
    - --shell -> planned: keep (or second arg)
      - auto, bash, zsh
  - create -> `k3d cluster create CLUSTERNAME`
    - --name -> dropped, implemented via arg
    - --volume -> implemented
    - --port -> implemented
    - --port-auto-offset -> TBD
    - --api-port -> implemented
    - --wait -> implemented
    - --image -> implemented
    - --server-arg -> implemented as `--k3s-server-arg`
    - --agent-arg -> implemented as `--k3s-agent-arg`
    - --env -> planned
    - --label -> planned
    - --workers -> implemented
    - --auto-restart -> dropped (docker's `unless-stopped` is set by default)
    - --enable-registry -> planned (possible consolidation into fewer registry-related commands?)
    - --registry-name -> TBD
    - --registry-port -> TBD
    - --registry-volume -> TBD
    - --registries-file -> TBD
    - --enable-registry-cache -> TBD
  - (add-node) -> `k3d node create NODENAME`
    - --role -> implemented
    - --name -> dropped, implemented as arg
    - --count -> implemented as `--replicas`
    - --image -> implemented
    - --arg -> planned
    - --env -> planned
    - --volume -> planned
    - --k3s -> TBD
    - --k3s-secret -> TBD
    - --k3s-token -> TBD
  - delete -> `k3d cluster delete CLUSTERNAME`
    - --name -> dropped, implemented as arg
    - --all -> implemented
    - --prune -> TBD
    - --keep-registry-volume -> TBD
  - stop -> `k3d cluster stop CLUSTERNAME`
    - --name -> dropped, implemented as arg
    - --all -> implemented
  - start -> `k3d cluster start CLUSTERNAME`
    - --name -> dropped, implemented as arg
    - --all -> implemented
  - list -> dropped, implemented as `k3d get clusters`
  - get-kubeconfig -> `k3d kubeconfig get|merge CLUSTERNAME`
    - --name -> dropped, implemented as arg
    - --all -> implemented
    - --overwrite -> implemented
  - import-images -> `k3d image import [--cluster CLUSTERNAME] [--keep] IMAGES`
    - --name -> implemented as `--cluster`
    - --no-remove -> implemented as `--keep-tarball`

Defaults

- multiple server nodes
  - by default, when --servers > 1 and no --datastore-x option is set, the first server node (server-0) will be the initializing server node
    - the initializing server node will have the --cluster-init flag appended
    - all other server nodes will refer to the initializing server node via --server https://<initializing-server>:6443
- API-Ports
  - by default, we don't expose any API-Port (no host port mapping)
- kubeconfig
  - if --[update|merge]-default-kubeconfig is set, we use the default loading rules to get the default kubeconfig:
    - First: kubeconfig specified via the KUBECONFIG environment variable (error out if multiple are specified)
    - Second: default kubeconfig in home directory (e.g. $HOME/.kube/config)

Networking

Related issues: rancher/k3d #220

Introduction

By default, k3d creates a new (docker) network for every new cluster. Use the --network STRING flag upon creation to connect to an existing network instead. Existing networks won't be managed by k3d together with the cluster lifecycle (see the example after this section).

Connecting to docker "internal"/pre-defined networks

host network

When using the --network flag to connect to the host network (i.e. k3d cluster create --network host), you won't be able to create more than one server node. An edge case would be one server node (with agent disabled) and one agent node.

bridge network

By default, every network that k3d creates works in bridge mode. But when you try to use --network bridge to connect to docker's internal bridge network, you may run into issues with grabbing certificates from the API-Server. Single-node clusters should work though.

none "network"

Well.. this doesn't really make sense for k3d anyway ¯\_(ツ)_/¯
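To make the --network behavior concrete, here is a small sketch (the network name shared-net is illustrative): create a docker network yourself, attach a cluster to it, and note that k3d leaves the pre-existing network in place when the cluster is deleted:

```sh
# Create a network that k3d should join rather than manage
docker network create shared-net

# The cluster's node containers attach to the existing network
k3d cluster create mycluster --network shared-net

# Deleting the cluster removes the nodes, but not the pre-existing network
k3d cluster delete mycluster
docker network ls | grep shared-net   # still there
```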
this doesn\u2019t really make sense for k3d anyway \u00af_(\u30c4)_/\u00af","title":"none \"network\""},{"location":"usage/commands/","text":"Command Tree \u00b6 k3d --verbose # enable verbose (debug) logging (default: false) --version # show k3d and k3s version -h, --help # show help text version # show k3d and k3s version help [ COMMAND ] # show help text for any command completion [ bash | zsh | ( psh | powershell )] # generate completion scripts for common shells cluster [ CLUSTERNAME ] # default cluster name is 'k3s-default' create --api-port # specify the port on which the cluster will be accessible (e.g. via kubectl) -i, --image # specify which k3s image should be used for the nodes --k3s-agent-arg # add additional arguments to the k3s agent (see https://rancher.com/docs/k3s/latest/en/installation/install-options/agent-config/#k3s-agent-cli-help) --k3s-server-arg # add additional arguments to the k3s server (see https://rancher.com/docs/k3s/latest/en/installation/install-options/server-config/#k3s-server-cli-help) -s, --servers # specify how many server nodes you want to create --network # specify a network you want to connect to --no-image-volume # disable the creation of a volume for storing images (used for the 'k3d load image' command) -p, --port # add some more port mappings --token # specify a cluster token (default: auto-generated) --timeout # specify a timeout, after which the cluster creation will be interrupted and changes rolled back --update-default-kubeconfig # enable the automated update of the default kubeconfig with the details of the newly created cluster (also sets '--wait=true') --switch-context # (implies --update-default-kubeconfig) automatically sets the current-context of your default kubeconfig to the new cluster's context -v, --volume # specify additional bind-mounts --wait # enable waiting for all server nodes to be ready before returning -a, --agents # specify how many agent nodes you want to create start CLUSTERNAME # start a (stopped) cluster -a, --all # start all clusters --wait # wait for all servers and server-loadbalancer to be up before returning --timeout # maximum waiting time for '--wait' before canceling/returning stop CLUSTERNAME # stop a cluster -a, --all # stop all clusters delete CLUSTERNAME # delete an existing cluster -a, --all # delete all existing clusters list [ CLUSTERNAME [ CLUSTERNAME ... ]] --no-headers # do not print headers --token # show column with cluster tokens node create NODENAME # Create new nodes (and add them to existing clusters) -c, --cluster # specify the cluster that the node shall connect to -i, --image # specify which k3s image should be used for the node(s) --replicas # specify how many replicas you want to create with this spec --role # specify the node role --wait # wait for the node to be up and running before returning --timeout # specify a timeout duration, after which the node creation will be interrupted, if not done yet start NODENAME # start a (stopped) node stop NODENAME # stop a node delete NODENAME # delete an existing node -a, --all # delete all existing nodes list NODENAME --no-headers # do not print headers kubeconfig get ( CLUSTERNAME [ CLUSTERNAME ... ] | --all ) # get kubeconfig from cluster(s) and save it into a file in $HOME/.k3d -a, --all # get kubeconfigs from all clusters merge ( CLUSTERNAME [ CLUSTERNAME ... 
] | --all ) # get kubeconfig from cluster(s) and merge it/them into an existing kubeconfig -a, --all # get kubeconfigs from all clusters --output # specify the output file where the kubeconfig should be written to --overwrite # [Careful!] forcefully overwrite the output file, ignoring existing contents -s, --switch-context # switch current-context in kubeconfig to the new context -u, --update # update conflicting fields in existing kubeconfig (default: true) -d, --merge-default-kubeconfig # update the default kubeconfig (usually $KUBECONFIG or $HOME/.kube/config) image import [ IMAGE | ARCHIVE [ IMAGE | ARCHIVE ... ]] # Load one or more images from the local runtime environment or tar-archives into k3d clusters -c, --cluster # clusters to load the image into -k, --keep-tarball # do not delete the image tarball from the shared volume after completion","title":"Command Tree"},{"location":"usage/commands/#command-tree","text":"k3d --verbose # enable verbose (debug) logging (default: false) --version # show k3d and k3s version -h, --help # show help text version # show k3d and k3s version help [ COMMAND ] # show help text for any command completion [ bash | zsh | ( psh | powershell )] # generate completion scripts for common shells cluster [ CLUSTERNAME ] # default cluster name is 'k3s-default' create --api-port # specify the port on which the cluster will be accessible (e.g. via kubectl) -i, --image # specify which k3s image should be used for the nodes --k3s-agent-arg # add additional arguments to the k3s agent (see https://rancher.com/docs/k3s/latest/en/installation/install-options/agent-config/#k3s-agent-cli-help) --k3s-server-arg # add additional arguments to the k3s server (see https://rancher.com/docs/k3s/latest/en/installation/install-options/server-config/#k3s-server-cli-help) -s, --servers # specify how many server nodes you want to create --network # specify a network you want to connect to --no-image-volume # disable the creation of a volume for storing images (used for the 'k3d load image' command) -p, --port # add some more port mappings --token # specify a cluster token (default: auto-generated) --timeout # specify a timeout, after which the cluster creation will be interrupted and changes rolled back --update-default-kubeconfig # enable the automated update of the default kubeconfig with the details of the newly created cluster (also sets '--wait=true') --switch-context # (implies --update-default-kubeconfig) automatically sets the current-context of your default kubeconfig to the new cluster's context -v, --volume # specify additional bind-mounts --wait # enable waiting for all server nodes to be ready before returning -a, --agents # specify how many agent nodes you want to create start CLUSTERNAME # start a (stopped) cluster -a, --all # start all clusters --wait # wait for all servers and server-loadbalancer to be up before returning --timeout # maximum waiting time for '--wait' before canceling/returning stop CLUSTERNAME # stop a cluster -a, --all # stop all clusters delete CLUSTERNAME # delete an existing cluster -a, --all # delete all existing clusters list [ CLUSTERNAME [ CLUSTERNAME ... 
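Pulling a few of these flags together, a typical invocation might look like this (cluster name, node counts and ports are illustrative):

```sh
# Three servers, two agents, API-Server on host port 6550,
# kubeconfig updated and context switched in one go
k3d cluster create demo \
  --servers 3 --agents 2 \
  --api-port 6550 \
  --update-default-kubeconfig --switch-context
```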
Handling Kubeconfigs

By default, k3d won't touch your kubeconfig without you telling it to do so. To get a kubeconfig set up for you to connect to a k3d cluster, you can go different ways.

What is the default kubeconfig? We determine the path of the used or default kubeconfig in two ways:

1. Using the KUBECONFIG environment variable, if it specifies exactly one file
2. Using the default path (e.g. on Linux it's $HOME/.kube/config)

Getting the kubeconfig for a newly created cluster

1. Create a new kubeconfig file after cluster creation: k3d kubeconfig get mycluster
   - Note: this will create (or update) the file $HOME/.k3d/kubeconfig-mycluster.yaml
   - Tip: Use it: export KUBECONFIG=$(k3d kubeconfig get mycluster)
2. Update your default kubeconfig upon cluster creation: k3d cluster create mycluster --update-kubeconfig
   - Note: this won't switch the current-context (append --switch-context to do so)
3. Update your default kubeconfig after cluster creation: k3d kubeconfig merge mycluster --merge-default-kubeconfig
   - Note: this won't switch the current-context (append --switch-context to do so)
4. Update a different kubeconfig after cluster creation: k3d kubeconfig merge mycluster --output some/other/file.yaml
   - Note: this won't switch the current-context
   - The file will be created if it doesn't exist

Switching the current context: None of the above options switch the current-context by default. This is intended to be least intrusive, since the current-context has a global effect. You can switch the current-context directly with the kubeconfig merge command by adding the --switch-context flag.

Removing cluster details from the kubeconfig

k3d cluster delete mycluster will always remove the details for mycluster from the default kubeconfig. It will also delete the respective kubeconfig file in $HOME/.k3d/ if it exists.

Handling multiple clusters

k3d kubeconfig merge lets you specify one or more clusters via arguments, or all via --all. All kubeconfigs will then be merged into a single file if --merge-default-kubeconfig or --output is specified. If neither of those two flags was specified, a new file will be created per cluster and the merged path (e.g. $HOME/.k3d/kubeconfig-cluster1.yaml:$HOME/.k3d/cluster2.yaml) will be returned. Note that with multiple clusters specified, the --switch-context flag will change the current context to the cluster which was last in the list (see the sketch below).
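A small sketch of the multi-cluster behavior just described (cluster names are illustrative): since k3d kubeconfig merge prints the resulting path(s) joined with ':', its output can feed KUBECONFIG directly:

```sh
# Two throwaway clusters
k3d cluster create cluster1
k3d cluster create cluster2

# Without --output/--merge-default-kubeconfig this prints one path
# per cluster, colon-separated, which KUBECONFIG understands
export KUBECONFIG="$(k3d kubeconfig merge cluster1 cluster2)"

kubectl config get-contexts   # both clusters should be listed
```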
Creating multi-server clusters

Important note: For the best results (and fewer unexpected issues), choose 1, 3, 5, ... server nodes.

Embedded dqlite

Create a cluster with 3 server nodes using k3s' embedded dqlite database. The first server to be created will use the --cluster-init flag and k3d will wait for it to be up and running before creating (and connecting) the other server nodes.

    k3d cluster create multiserver --servers 3

Adding server nodes to a running cluster

In theory (and also in practice in most cases), this is as easy as executing the following command:

    k3d node create newserver --cluster multiserver --role server

There's a trap! If your cluster was initially created with only a single server node, then this will fail. That's because the initial server node was not started with the --cluster-init flag and thus is not using the dqlite backend.

Exposing Services

1. via Ingress

In this example, we will deploy a simple nginx webserver deployment and make it accessible via ingress. Therefore, we have to create the cluster in a way that the internal port 80 (where the traefik ingress controller is listening) is exposed on the host system.

Create a cluster, mapping the ingress port 80 to localhost:8081:

    k3d cluster create --api-port 6550 -p 8081:80@loadbalancer --agents 2

Good to know:

- --api-port 6550 is not required for the example to work. It's used to have k3s's API-Server listening on port 6550 with that port mapped to the host system.
- the port-mapping construct 8081:80@loadbalancer means: map port 8081 from the host to port 80 on the container which matches the nodefilter loadbalancer
- the loadbalancer nodefilter matches only the serverlb that's deployed in front of a cluster's server nodes
- all ports exposed on the serverlb will be proxied to the same ports on all server nodes in the cluster

Get the kubeconfig file:

    export KUBECONFIG="$(k3d kubeconfig get k3s-default)"

Create a nginx deployment:

    kubectl create deployment nginx --image=nginx

Create a ClusterIP service for it:

    kubectl create service clusterip nginx --tcp=80:80

Create an ingress object for it with kubectl apply -f (Note: k3s deploys traefik as the default ingress controller; a note on the newer Ingress API follows at the end of this guide):

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: nginx
      annotations:
        ingress.kubernetes.io/ssl-redirect: "false"
    spec:
      rules:
      - http:
          paths:
          - path: /
            backend:
              serviceName: nginx
              servicePort: 80

Curl it via localhost:

    curl localhost:8081/

2. via NodePort

Create a cluster, mapping the port 30080 from agent-0 to localhost:8082:

    k3d cluster create mycluster -p 8082:30080@agent[0] --agents 2

Note: Kubernetes' default NodePort range is 30000-32767.

... (Steps 2 and 3 like above) ...

Create a NodePort service for it with kubectl apply -f:

    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app: nginx
      name: nginx
    spec:
      ports:
      - name: 80-80
        nodePort: 30080
        port: 80
        protocol: TCP
        targetPort: 80
      selector:
        app: nginx
      type: NodePort

Curl it via localhost:

    curl localhost:8082/
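The Ingress manifest in step 1 uses the extensions/v1beta1 API, which matches the Kubernetes releases k3s shipped around this time. On newer clusters, where that API group has been removed, an equivalent object under networking.k8s.io/v1 would look roughly like this (a sketch, not part of the original guide):

```sh
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
EOF
```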
via Ingress"},{"location":"usage/guides/exposing_services/#2-via-nodeport","text":"Create a cluster, mapping the port 30080 from agent-0 to localhost:8082 k3d cluster create mycluster -p 8082 :30080@agent [ 0 ] --agents 2 Note: Kubernetes\u2019 default NodePort range is 30000-32767 \u2026 (Steps 2 and 3 like above) \u2026 Create a NodePort service for it with kubectl apply -f apiVersion : v1 kind : Service metadata : labels : app : nginx name : nginx spec : ports : - name : 80-80 nodePort : 30080 port : 80 protocol : TCP targetPort : 80 selector : app : nginx type : NodePort Curl it via localhost curl localhost:8082/","title":"2. via NodePort"},{"location":"usage/guides/registries/","text":"Registries \u00b6 Registries configuration file \u00b6 You can add registries by specifying them in a registries.yaml and mounting them at creation time: k3d cluster create mycluster --volume /home/YOU/my-registries.yaml:/etc/rancher/k3s/registries.yaml . This file is a regular k3s registries configuration file , and looks like this: mirrors : \"my.company.registry:5000\" : endpoint : - http://my.company.registry:5000 In this example, an image with a name like my.company.registry:5000/nginx:latest would be pulled from the registry running at http://my.company.registry:5000 . Note well there is an important limitation: this configuration file will only work with k3s >= v0.10.0 . It will fail silently with previous versions of k3s, but you find in the section below an alternative solution. This file can also be used for providing additional information necessary for accessing some registries, like authentication and certificates . Authenticated registries \u00b6 When using authenticated registries, we can add the username and password in a configs section in the registries.yaml , like this: mirrors : my.company.registry : endpoint : - http://my.company.registry configs : my.company.registry : auth : username : aladin password : abracadabra Secure registries \u00b6 When using secure registries, the registries.yaml file must include information about the certificates. For example, if you want to use images from the secure registry running at https://my.company.registry , you must first download a CA file valid for that server and store it in some well-known directory like ${HOME}/.k3d/my-company-root.pem . Then you have to mount the CA file in some directory in the nodes in the cluster and include that mounted file in a configs section in the registries.yaml file . For example, if we mount the CA file in /etc/ssl/certs/my-company-root.pem , the registries.yaml will look like: mirrors : my.company.registry : endpoint : - https://my.company.registry configs : my.company.registry : tls : # we will mount \"my-company-root.pem\" in the /etc/ssl/certs/ directory. 
ca_file : \"/etc/ssl/certs/my-company-root.pem\" Finally, we can create the cluster, mounting the CA file in the path we specified in ca_file : k3d cluster create --volume ${ HOME } /.k3d/my-registries.yaml:/etc/rancher/k3s/registries.yaml --volume ${ HOME } /.k3d/my-company-root.pem:/etc/ssl/certs/my-company-root.pem Using a local registry \u00b6 Using the k3d registry \u00b6 Not ported yet The k3d-managed registry has not yet been ported from v1.x to v3.x Using your own local registry \u00b6 You can start your own local registry it with some docker commands, like: docker volume create local_registry docker container run -d --name registry.localhost -v local_registry:/var/lib/registry --restart always -p 5000 :5000 registry:2 These commands will start your registry in registry.localhost:5000 . In order to push to this registry, you will need to make it accessible as described in the next section. Once your registry is up and running, we will need to add it to your registries.yaml configuration file. Finally, you have to connect the registry network to the k3d cluster network: docker network connect k3d-k3s-default registry.localhost . And then you can test your local registry . Pushing to your local registry address \u00b6 As per the guide above, the registry will be available at registry.localhost:5000 . All the nodes in your k3d cluster can resolve this hostname (thanks to the DNS server provided by the Docker daemon) but, in order to be able to push to this registry, this hostname also has to be resolved by your host. Luckily (for Linux users), NSS-myhostname ships with many Linux distributions and should resolve *.localhost automatically to 127.0.0.1 . Otherwise, it\u2019s installable using sudo apt install libnss-myhostname . If it\u2019s not the case, you can add an entry in your /etc/hosts file like this: 127 .0.0.1 registry.localhost Once again, this will only work with k3s >= v0.10.0 (see the some sections below when using k3s <= v0.9.1) Testing your registry \u00b6 You should test that you can push to your registry from your local development machine. use images from that registry in Deployments in your k3d cluster. We will verify these two things for a local registry (located at registry.localhost:5000 ) running in your development machine. Things would be basically the same for checking an external registry, but some additional configuration could be necessary in your local machine when using an authenticated or secure registry (please refer to Docker\u2019s documentation for this). First, we can download some image (like nginx ) and push it to our local registry with: ```shell script docker pull nginx:latest docker tag nginx:latest registry.localhost:5000/nginx:latest docker push registry.localhost:5000/nginx:latest Then we can deploy a pod referencing this image to your cluster: ```shell script cat <= v0.10.0 . It will fail silently with previous versions of k3s, but you find in the section below an alternative solution. 
This file can also be used for providing additional information necessary for accessing some registries, like authentication and certificates .","title":"Registries configuration file"},{"location":"usage/guides/registries/#authenticated-registries","text":"When using authenticated registries, we can add the username and password in a configs section in the registries.yaml , like this: mirrors : my.company.registry : endpoint : - http://my.company.registry configs : my.company.registry : auth : username : aladin password : abracadabra","title":"Authenticated registries"},{"location":"usage/guides/registries/#secure-registries","text":"When using secure registries, the registries.yaml file must include information about the certificates. For example, if you want to use images from the secure registry running at https://my.company.registry , you must first download a CA file valid for that server and store it in some well-known directory like ${HOME}/.k3d/my-company-root.pem . Then you have to mount the CA file in some directory in the nodes in the cluster and include that mounted file in a configs section in the registries.yaml file . For example, if we mount the CA file in /etc/ssl/certs/my-company-root.pem , the registries.yaml will look like: mirrors : my.company.registry : endpoint : - https://my.company.registry configs : my.company.registry : tls : # we will mount \"my-company-root.pem\" in the /etc/ssl/certs/ directory. ca_file : \"/etc/ssl/certs/my-company-root.pem\" Finally, we can create the cluster, mounting the CA file in the path we specified in ca_file : k3d cluster create --volume ${ HOME } /.k3d/my-registries.yaml:/etc/rancher/k3s/registries.yaml --volume ${ HOME } /.k3d/my-company-root.pem:/etc/ssl/certs/my-company-root.pem","title":"Secure registries"},{"location":"usage/guides/registries/#using-a-local-registry","text":"","title":"Using a local registry"},{"location":"usage/guides/registries/#using-the-k3d-registry","text":"Not ported yet The k3d-managed registry has not yet been ported from v1.x to v3.x","title":"Using the k3d registry"},{"location":"usage/guides/registries/#using-your-own-local-registry","text":"You can start your own local registry it with some docker commands, like: docker volume create local_registry docker container run -d --name registry.localhost -v local_registry:/var/lib/registry --restart always -p 5000 :5000 registry:2 These commands will start your registry in registry.localhost:5000 . In order to push to this registry, you will need to make it accessible as described in the next section. Once your registry is up and running, we will need to add it to your registries.yaml configuration file. Finally, you have to connect the registry network to the k3d cluster network: docker network connect k3d-k3s-default registry.localhost . And then you can test your local registry .","title":"Using your own local registry"},{"location":"usage/guides/registries/#pushing-to-your-local-registry-address","text":"As per the guide above, the registry will be available at registry.localhost:5000 . All the nodes in your k3d cluster can resolve this hostname (thanks to the DNS server provided by the Docker daemon) but, in order to be able to push to this registry, this hostname also has to be resolved by your host. Luckily (for Linux users), NSS-myhostname ships with many Linux distributions and should resolve *.localhost automatically to 127.0.0.1 . Otherwise, it\u2019s installable using sudo apt install libnss-myhostname . 
If it\u2019s not the case, you can add an entry in your /etc/hosts file like this: 127 .0.0.1 registry.localhost Once again, this will only work with k3s >= v0.10.0 (see the some sections below when using k3s <= v0.9.1)","title":"Pushing to your local registry address"},{"location":"usage/guides/registries/#testing-your-registry","text":"You should test that you can push to your registry from your local development machine. use images from that registry in Deployments in your k3d cluster. We will verify these two things for a local registry (located at registry.localhost:5000 ) running in your development machine. Things would be basically the same for checking an external registry, but some additional configuration could be necessary in your local machine when using an authenticated or secure registry (please refer to Docker\u2019s documentation for this). First, we can download some image (like nginx ) and push it to our local registry with: ```shell script docker pull nginx:latest docker tag nginx:latest registry.localhost:5000/nginx:latest docker push registry.localhost:5000/nginx:latest Then we can deploy a pod referencing this image to your cluster: ```shell script cat < implementation in v3 \u00b6 - k3d - check-tools -> won't do - shell -> planned: `k3d shell CLUSTER` - --name -> planned: drop (now as arg) - --command -> planned: keep - --shell -> planned: keep (or second arg) - auto, bash, zsh - create -> `k3d cluster create CLUSTERNAME` - --name -> dropped, implemented via arg - --volume -> implemented - --port -> implemented - --port-auto-offset -> TBD - --api-port -> implemented - --wait -> implemented - --image -> implemented - --server-arg -> implemented as `--k3s-server-arg` - --agent-arg -> implemented as `--k3s-agent-arg` - --env -> planned - --label -> planned - --workers -> implemented - --auto-restart -> dropped (docker's `unless-stopped` is set by default) - --enable-registry -> planned (possible consolidation into less registry-related commands?) - --registry-name -> TBD - --registry-port -> TBD - --registry-volume -> TBD - --registries-file -> TBD - --enable-registry-cache -> TBD - (add-node) -> `k3d node create NODENAME` - --role -> implemented - --name -> dropped, implemented as arg - --count -> implemented as `--replicas` - --image -> implemented - --arg -> planned - --env -> planned - --volume -> planned - --k3s -> TBD - --k3s-secret -> TBD - --k3s-token -> TBD - delete -> `k3d cluster delete CLUSTERNAME` - --name -> dropped, implemented as arg - --all -> implemented - --prune -> TBD - --keep-registry-volume -> TBD - stop -> `k3d cluster stop CLUSTERNAME` - --name -> dropped, implemented as arg - --all -> implemented - start -> `k3d cluster start CLUSTERNAME` - --name -> dropped, implemented as arg - --all -> implemented - list -> dropped, implemented as `k3d get clusters` - get-kubeconfig -> `k3d kubeconfig get|merge CLUSTERNAME` - --name -> dropped, implemented as arg - --all -> implemented - --overwrite -> implemented - import-images -> `k3d image import [--cluster CLUSTERNAME] [--keep] IMAGES` - --name -> implemented as `--cluster` - --no-remove -> implemented as `--keep-tarball`","title":"Feature Comparison: v1 vs. v3"},{"location":"faq/v1vsv3-comparison/#feature-comparison-v1-vs-v3","text":"","title":"Feature Comparison: v1 vs. 
v3"},{"location":"faq/v1vsv3-comparison/#v1x-feature-implementation-in-v3","text":"- k3d - check-tools -> won't do - shell -> planned: `k3d shell CLUSTER` - --name -> planned: drop (now as arg) - --command -> planned: keep - --shell -> planned: keep (or second arg) - auto, bash, zsh - create -> `k3d cluster create CLUSTERNAME` - --name -> dropped, implemented via arg - --volume -> implemented - --port -> implemented - --port-auto-offset -> TBD - --api-port -> implemented - --wait -> implemented - --image -> implemented - --server-arg -> implemented as `--k3s-server-arg` - --agent-arg -> implemented as `--k3s-agent-arg` - --env -> planned - --label -> planned - --workers -> implemented - --auto-restart -> dropped (docker's `unless-stopped` is set by default) - --enable-registry -> planned (possible consolidation into less registry-related commands?) - --registry-name -> TBD - --registry-port -> TBD - --registry-volume -> TBD - --registries-file -> TBD - --enable-registry-cache -> TBD - (add-node) -> `k3d node create NODENAME` - --role -> implemented - --name -> dropped, implemented as arg - --count -> implemented as `--replicas` - --image -> implemented - --arg -> planned - --env -> planned - --volume -> planned - --k3s -> TBD - --k3s-secret -> TBD - --k3s-token -> TBD - delete -> `k3d cluster delete CLUSTERNAME` - --name -> dropped, implemented as arg - --all -> implemented - --prune -> TBD - --keep-registry-volume -> TBD - stop -> `k3d cluster stop CLUSTERNAME` - --name -> dropped, implemented as arg - --all -> implemented - start -> `k3d cluster start CLUSTERNAME` - --name -> dropped, implemented as arg - --all -> implemented - list -> dropped, implemented as `k3d get clusters` - get-kubeconfig -> `k3d kubeconfig get|merge CLUSTERNAME` - --name -> dropped, implemented as arg - --all -> implemented - --overwrite -> implemented - import-images -> `k3d image import [--cluster CLUSTERNAME] [--keep] IMAGES` - --name -> implemented as `--cluster` - --no-remove -> implemented as `--keep-tarball`","title":"v1.x feature -> implementation in v3"},{"location":"internals/defaults/","text":"Defaults \u00b6 multiple server nodes by default, when --server > 1 and no --datastore-x option is set, the first server node (server-0) will be the initializing server node the initializing server node will have the --cluster-init flag appended all other server nodes will refer to the initializing server node via --server https://:6443 API-Ports by default, we don\u2019t expose any API-Port (no host port mapping) kubeconfig if --[update|merge]-default-kubeconfig is set, we use the default loading rules to get the default kubeconfig: First: kubeconfig specified via the KUBECONFIG environment variable (error out if multiple are specified) Second: default kubeconfig in home directory (e.g. 
$HOME/.kube/config )","title":"Defaults"},{"location":"internals/defaults/#defaults","text":"multiple server nodes by default, when --server > 1 and no --datastore-x option is set, the first server node (server-0) will be the initializing server node the initializing server node will have the --cluster-init flag appended all other server nodes will refer to the initializing server node via --server https://:6443 API-Ports by default, we don\u2019t expose any API-Port (no host port mapping) kubeconfig if --[update|merge]-default-kubeconfig is set, we use the default loading rules to get the default kubeconfig: First: kubeconfig specified via the KUBECONFIG environment variable (error out if multiple are specified) Second: default kubeconfig in home directory (e.g. $HOME/.kube/config )","title":"Defaults"},{"location":"internals/networking/","text":"Networking \u00b6 Related issues: rancher/k3d #220 Introduction \u00b6 By default, k3d creates a new (docker) network for every new cluster. Using the --network STRING flag upon creation to connect to an existing network. Existing networks won\u2019t be managed by k3d together with the cluster lifecycle. Connecting to docker \u201cinternal\u201d/pre-defined networks \u00b6 host network \u00b6 When using the --network flag to connect to the host network (i.e. k3d cluster create --network host ), you won\u2019t be able to create more than one server node . An edge case would be one server node (with agent disabled) and one agent node. bridge network \u00b6 By default, every network that k3d creates is working in bridge mode. But when you try to use --network bridge to connect to docker\u2019s internal bridge network, you may run into issues with grabbing certificates from the API-Server. Single-Node clusters should work though. none \u201cnetwork\u201d \u00b6 Well.. this doesn\u2019t really make sense for k3d anyway \u00af_(\u30c4)_/\u00af","title":"Networking"},{"location":"internals/networking/#networking","text":"Related issues: rancher/k3d #220","title":"Networking"},{"location":"internals/networking/#introduction","text":"By default, k3d creates a new (docker) network for every new cluster. Using the --network STRING flag upon creation to connect to an existing network. Existing networks won\u2019t be managed by k3d together with the cluster lifecycle.","title":"Introduction"},{"location":"internals/networking/#connecting-to-docker-internalpre-defined-networks","text":"","title":"Connecting to docker \"internal\"/pre-defined networks"},{"location":"internals/networking/#host-network","text":"When using the --network flag to connect to the host network (i.e. k3d cluster create --network host ), you won\u2019t be able to create more than one server node . An edge case would be one server node (with agent disabled) and one agent node.","title":"host network"},{"location":"internals/networking/#bridge-network","text":"By default, every network that k3d creates is working in bridge mode. But when you try to use --network bridge to connect to docker\u2019s internal bridge network, you may run into issues with grabbing certificates from the API-Server. Single-Node clusters should work though.","title":"bridge network"},{"location":"internals/networking/#none-network","text":"Well.. 
{"location":"usage/commands/","text":"Command Tree \u00b6 k3d --verbose # enable verbose (debug) logging (default: false) --version # show k3d and k3s version -h, --help # show help text version # show k3d and k3s version help [ COMMAND ] # show help text for any command completion [ bash | zsh | ( psh | powershell )] # generate completion scripts for common shells cluster [ CLUSTERNAME ] # default cluster name is 'k3s-default' create --api-port # specify the port on which the cluster will be accessible (e.g. via kubectl) -i, --image # specify which k3s image should be used for the nodes --k3s-agent-arg # add additional arguments to the k3s agent (see https://rancher.com/docs/k3s/latest/en/installation/install-options/agent-config/#k3s-agent-cli-help) --k3s-server-arg # add additional arguments to the k3s server (see https://rancher.com/docs/k3s/latest/en/installation/install-options/server-config/#k3s-server-cli-help) -s, --servers # specify how many server nodes you want to create --network # specify a network you want to connect to --no-image-volume # disable the creation of a volume for storing images (used for the 'k3d load image' command) -p, --port # add some more port mappings --token # specify a cluster token (default: auto-generated) --timeout # specify a timeout, after which the cluster creation will be interrupted and changes rolled back --update-default-kubeconfig # enable the automated update of the default kubeconfig with the details of the newly created cluster (also sets '--wait=true') --switch-context # (implies --update-default-kubeconfig) automatically sets the current-context of your default kubeconfig to the new cluster's context -v, --volume # specify additional bind-mounts --wait # enable waiting for all server nodes to be ready before returning -a, --agents # specify how many agent nodes you want to create start CLUSTERNAME # start a (stopped) cluster -a, --all # start all clusters --wait # wait for all servers and server-loadbalancer to be up before returning --timeout # maximum waiting time for '--wait' before canceling/returning stop CLUSTERNAME # stop a cluster -a, --all # stop all clusters delete CLUSTERNAME # delete an existing cluster -a, --all # delete all existing clusters list [ CLUSTERNAME [ CLUSTERNAME ... ]] --no-headers # do not print headers --token # show column with cluster tokens node create NODENAME # Create new nodes (and add them to existing clusters) -c, --cluster # specify the cluster that the node shall connect to -i, --image # specify which k3s image should be used for the node(s) --replicas # specify how many replicas you want to create with this spec --role # specify the node role --wait # wait for the node to be up and running before returning --timeout # specify a timeout duration, after which the node creation will be interrupted, if not done yet start NODENAME # start a (stopped) node stop NODENAME # stop a node delete NODENAME # delete an existing node -a, --all # delete all existing nodes list NODENAME --no-headers # do not print headers kubeconfig get ( CLUSTERNAME [ CLUSTERNAME ... ] | --all ) # get kubeconfig from cluster(s) and save it into a file in $HOME/.k3d -a, --all # get kubeconfigs from all clusters merge ( CLUSTERNAME [ CLUSTERNAME ...
] | --all ) # get kubeconfig from cluster(s) and merge it/them into an existing kubeconfig -a, --all # get kubeconfigs from all clusters --output # specify the output file where the kubeconfig should be written to --overwrite # [Careful!] forcefully overwrite the output file, ignoring existing contents -s, --switch-context # switch current-context in kubeconfig to the new context -u, --update # update conflicting fields in existing kubeconfig (default: true) -d, --merge-default-kubeconfig # update the default kubeconfig (usually $KUBECONFIG or $HOME/.kube/config) image import [ IMAGE | ARCHIVE [ IMAGE | ARCHIVE ... ]] # Load one or more images from the local runtime environment or tar-archives into k3d clusters -c, --cluster # clusters to load the image into -k, --keep-tarball # do not delete the image tarball from the shared volume after completion","title":"Command Tree"},{"location":"usage/commands/#command-tree","text":"k3d --verbose # enable verbose (debug) logging (default: false) --version # show k3d and k3s version -h, --help # show help text version # show k3d and k3s version help [ COMMAND ] # show help text for any command completion [ bash | zsh | ( psh | powershell )] # generate completion scripts for common shells cluster [ CLUSTERNAME ] # default cluster name is 'k3s-default' create --api-port # specify the port on which the cluster will be accessible (e.g. via kubectl) -i, --image # specify which k3s image should be used for the nodes --k3s-agent-arg # add additional arguments to the k3s agent (see https://rancher.com/docs/k3s/latest/en/installation/install-options/agent-config/#k3s-agent-cli-help) --k3s-server-arg # add additional arguments to the k3s server (see https://rancher.com/docs/k3s/latest/en/installation/install-options/server-config/#k3s-server-cli-help) -s, --servers # specify how many server nodes you want to create --network # specify a network you want to connect to --no-image-volume # disable the creation of a volume for storing images (used for the 'k3d load image' command) -p, --port # add some more port mappings --token # specify a cluster token (default: auto-generated) --timeout # specify a timeout, after which the cluster creation will be interrupted and changes rolled back --update-default-kubeconfig # enable the automated update of the default kubeconfig with the details of the newly created cluster (also sets '--wait=true') --switch-context # (implies --update-default-kubeconfig) automatically sets the current-context of your default kubeconfig to the new cluster's context -v, --volume # specify additional bind-mounts --wait # enable waiting for all server nodes to be ready before returning -a, --agents # specify how many agent nodes you want to create start CLUSTERNAME # start a (stopped) cluster -a, --all # start all clusters --wait # wait for all servers and server-loadbalancer to be up before returning --timeout # maximum waiting time for '--wait' before canceling/returning stop CLUSTERNAME # stop a cluster -a, --all # stop all clusters delete CLUSTERNAME # delete an existing cluster -a, --all # delete all existing clusters list [ CLUSTERNAME [ CLUSTERNAME ... 
]] --no-headers # do not print headers --token # show column with cluster tokens node create NODENAME # Create new nodes (and add them to existing clusters) -c, --cluster # specify the cluster that the node shall connect to -i, --image # specify which k3s image should be used for the node(s) --replicas # specify how many replicas you want to create with this spec --role # specify the node role --wait # wait for the node to be up and running before returning --timeout # specify a timeout duration, after which the node creation will be interrupted, if not done yet start NODENAME # start a (stopped) node stop NODENAME # stop a node delete NODENAME # delete an existing node -a, --all # delete all existing nodes list NODENAME --no-headers # do not print headers kubeconfig get ( CLUSTERNAME [ CLUSTERNAME ... ] | --all ) # get kubeconfig from cluster(s) and save it into a file in $HOME/.k3d -a, --all # get kubeconfigs from all clusters merge ( CLUSTERNAME [ CLUSTERNAME ... ] | --all ) # get kubeconfig from cluster(s) and merge it/them into an existing kubeconfig -a, --all # get kubeconfigs from all clusters --output # specify the output file where the kubeconfig should be written to --overwrite # [Careful!] forcefully overwrite the output file, ignoring existing contents -s, --switch-context # switch current-context in kubeconfig to the new context -u, --update # update conflicting fields in existing kubeconfig (default: true) -d, --merge-default-kubeconfig # update the default kubeconfig (usually $KUBECONFIG or $HOME/.kube/config) image import [ IMAGE | ARCHIVE [ IMAGE | ARCHIVE ... ]] # Load one or more images from the local runtime environment or tar-archives into k3d clusters -c, --cluster # clusters to load the image into -k, --keep-tarball # do not delete the image tarball from the shared volume after completion","title":"Command Tree"},{"location":"usage/kubeconfig/","text":"Handling Kubeconfigs \u00b6 By default, k3d won\u2019t touch your kubeconfig without you telling it to do so. To get a kubeconfig set up for you to connect to a k3d cluster, you can go different ways. What is the default kubeconfig? We determine the path of the used or default kubeconfig in two ways: Using the KUBECONFIG environment variable, if it specifies exactly one file Using the default path (e.g. on Linux it\u2019s $HOME /.kube/config ) Getting the kubeconfig for a newly created cluster \u00b6 Create a new kubeconfig file after cluster creation k3d kubeconfig get mycluster Note: this will create (or update) the file $HOME/.k3d/kubeconfig-mycluster.yaml Tip: Use it: export KUBECONFIG = $( k3d kubeconfig get mycluster ) Update your default kubeconfig upon cluster creation k3d cluster create mycluster --update-kubeconfig Note: this won\u2019t switch the current-context (append --switch-context to do so) Update your default kubeconfig after cluster creation k3d kubeconfig merge mycluster --merge-default-kubeconfig Note: this won\u2019t switch the current-context (append --switch-context to do so) Update a different kubeconfig after cluster creation k3d kubeconfig merge mycluster --output some/other/file.yaml Note: this won\u2019t switch the current-context The file will be created if it doesn\u2019t exist Switching the current context None of the above options switch the current-context by default. This is intended to be least intrusive, since the current-context has a global effect. You can switch the current-context directly with the kubeconfig merge command by adding the --switch-context flag. 
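To combine the two behaviours described above in a single command — a minimal sketch (cluster name hypothetical):

```bash
# Merge the new cluster's details into the default kubeconfig
# and make its context the current one in one step
k3d kubeconfig merge mycluster --merge-default-kubeconfig --switch-context
```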
Removing cluster details from the kubeconfig \u00b6 k3d cluster delete mycluster will always remove the details for mycluster from the default kubeconfig. It will also delete the respective kubeconfig file in $HOME/.k3d/ if it exists. Handling multiple clusters \u00b6 k3d kubeconfig merge lets you specify one or more clusters via arguments or all via --all . All kubeconfigs will then be merged into a single file if --merge-default-kubeconfig or --output is specified. If neither of those two flags was specified, a new file will be created per cluster and the merged path (e.g. $HOME/.k3d/kubeconfig-cluster1.yaml:$HOME/.k3d/cluster2.yaml ) will be returned. Note that, with multiple clusters specified, the --switch-context flag will change the current context to the cluster which was last in the list.","title":"Handling Kubeconfigs"},{"location":"usage/kubeconfig/#handling-kubeconfigs","text":"By default, k3d won\u2019t touch your kubeconfig without you telling it to do so. To get a kubeconfig set up for you to connect to a k3d cluster, you can go different ways. What is the default kubeconfig? We determine the path of the used or default kubeconfig in two ways: Using the KUBECONFIG environment variable, if it specifies exactly one file Using the default path (e.g. on Linux it\u2019s $HOME /.kube/config )","title":"Handling Kubeconfigs"},{"location":"usage/kubeconfig/#getting-the-kubeconfig-for-a-newly-created-cluster","text":"Create a new kubeconfig file after cluster creation k3d kubeconfig get mycluster Note: this will create (or update) the file $HOME/.k3d/kubeconfig-mycluster.yaml Tip: Use it: export KUBECONFIG = $( k3d kubeconfig get mycluster ) Update your default kubeconfig upon cluster creation k3d cluster create mycluster --update-kubeconfig Note: this won\u2019t switch the current-context (append --switch-context to do so) Update your default kubeconfig after cluster creation k3d kubeconfig merge mycluster --merge-default-kubeconfig Note: this won\u2019t switch the current-context (append --switch-context to do so) Update a different kubeconfig after cluster creation k3d kubeconfig merge mycluster --output some/other/file.yaml Note: this won\u2019t switch the current-context The file will be created if it doesn\u2019t exist Switching the current context None of the above options switch the current-context by default. This is intended to be least intrusive, since the current-context has a global effect. You can switch the current-context directly with the kubeconfig merge command by adding the --switch-context flag.","title":"Getting the kubeconfig for a newly created cluster"},{"location":"usage/kubeconfig/#removing-cluster-details-from-the-kubeconfig","text":"k3d cluster delete mycluster will always remove the details for mycluster from the default kubeconfig. It will also delete the respective kubeconfig file in $HOME/.k3d/ if it exists.","title":"Removing cluster details from the kubeconfig"},{"location":"usage/kubeconfig/#handling-multiple-clusters","text":"k3d kubeconfig merge lets you specify one or more clusters via arguments or all via --all . All kubeconfigs will then be merged into a single file if --merge-default-kubeconfig or --output is specified. If neither of those two flags was specified, a new file will be created per cluster and the merged path (e.g. $HOME/.k3d/kubeconfig-cluster1.yaml:$HOME/.k3d/cluster2.yaml ) will be returned.
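For example, assuming two clusters named cluster1 and cluster2 (hypothetical names), a sketch of merging both into a single file and pointing kubectl at it:

```bash
# Merge the kubeconfigs of both clusters into one file and use it
k3d kubeconfig merge cluster1 cluster2 --output "$HOME/.k3d/all-clusters.yaml"
export KUBECONFIG="$HOME/.k3d/all-clusters.yaml"
```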
Note that, with multiple clusters specified, the --switch-context flag will change the current context to the cluster which was last in the list.","title":"Handling multiple clusters"},{"location":"usage/multiserver/","text":"Creating multi-server clusters \u00b6 Important note For the best results (and fewer unexpected issues), choose 1, 3, 5, \u2026 server nodes. Embedded dqlite \u00b6 Create a cluster with 3 server nodes using k3s\u2019 embedded dqlite database. The first server to be created will use the --cluster-init flag and k3d will wait for it to be up and running before creating (and connecting) the other server nodes. k3d cluster create multiserver --servers 3 Adding server nodes to a running cluster \u00b6 In theory (and also in practice in most cases), this is as easy as executing the following command: k3d node create newserver --cluster multiserver --role server There\u2019s a trap! If your cluster was initially created with only a single server node, then this will fail. That\u2019s because the initial server node was not started with the --cluster-init flag and thus is not using the dqlite backend.","title":"Creating multi-server clusters"},{"location":"usage/multiserver/#creating-multi-server-clusters","text":"Important note For the best results (and fewer unexpected issues), choose 1, 3, 5, \u2026 server nodes.","title":"Creating multi-server clusters"},{"location":"usage/multiserver/#embedded-dqlite","text":"Create a cluster with 3 server nodes using k3s\u2019 embedded dqlite database. The first server to be created will use the --cluster-init flag and k3d will wait for it to be up and running before creating (and connecting) the other server nodes. k3d cluster create multiserver --servers 3","title":"Embedded dqlite"},{"location":"usage/multiserver/#adding-server-nodes-to-a-running-cluster","text":"In theory (and also in practice in most cases), this is as easy as executing the following command: k3d node create newserver --cluster multiserver --role server There\u2019s a trap! If your cluster was initially created with only a single server node, then this will fail. That\u2019s because the initial server node was not started with the --cluster-init flag and thus is not using the dqlite backend.","title":"Adding server nodes to a running cluster"},
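Putting the commands above together — a quick end-to-end sketch (cluster name taken from the example above, kubectl assumed to be installed):

```bash
# Create a 3-server cluster; the first server (server-0) gets --cluster-init
k3d cluster create multiserver --servers 3

# Merge its kubeconfig, switch to the new context and verify all three servers
k3d kubeconfig merge multiserver --switch-context
kubectl get nodes
```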
{"location":"usage/guides/exposing_services/","text":"Exposing Services \u00b6 1. via Ingress \u00b6 In this example, we will deploy a simple nginx webserver deployment and make it accessible via ingress. Therefore, we have to create the cluster in a way that the internal port 80 (on which the traefik ingress controller is listening) is exposed on the host system. Create a cluster, mapping the ingress port 80 to localhost:8081 k3d cluster create --api-port 6550 -p 8081 :80@loadbalancer --agents 2 Good to know --api-port 6550 is not required for the example to work. It\u2019s used to have k3s\u2019s API-Server listening on port 6550 with that port mapped to the host system. the port-mapping construct 8081:80@loadbalancer means map port 8081 from the host to port 80 on the container which matches the nodefilter loadbalancer the loadbalancer nodefilter matches only the serverlb that\u2019s deployed in front of a cluster\u2019s server nodes all ports exposed on the serverlb will be proxied to the same ports on all server nodes in the cluster Get the kubeconfig file export KUBECONFIG = \" $( k3d kubeconfig get k3s-default ) \" Create an nginx deployment kubectl create deployment nginx --image = nginx Create a ClusterIP service for it kubectl create service clusterip nginx --tcp = 80 :80 Create an ingress object for it with kubectl apply -f Note : k3s deploys traefik as the default ingress controller apiVersion : extensions/v1beta1 kind : Ingress metadata : name : nginx annotations : ingress.kubernetes.io/ssl-redirect : \"false\" spec : rules : - http : paths : - path : / backend : serviceName : nginx servicePort : 80 Curl it via localhost curl localhost:8081/ 2. via NodePort \u00b6 Create a cluster, mapping the port 30080 from agent-0 to localhost:8082 k3d cluster create mycluster -p 8082 :30080@agent [ 0 ] --agents 2 Note: Kubernetes\u2019 default NodePort range is 30000-32767 \u2026 (Steps 2 and 3 like above) \u2026 Create a NodePort service for it with kubectl apply -f apiVersion : v1 kind : Service metadata : labels : app : nginx name : nginx spec : ports : - name : 80-80 nodePort : 30080 port : 80 protocol : TCP targetPort : 80 selector : app : nginx type : NodePort Curl it via localhost curl localhost:8082/","title":"Exposing Services"},{"location":"usage/guides/exposing_services/#exposing-services","text":"","title":"Exposing Services"},{"location":"usage/guides/exposing_services/#1-via-ingress","text":"In this example, we will deploy a simple nginx webserver deployment and make it accessible via ingress. Therefore, we have to create the cluster in a way that the internal port 80 (on which the traefik ingress controller is listening) is exposed on the host system. Create a cluster, mapping the ingress port 80 to localhost:8081 k3d cluster create --api-port 6550 -p 8081 :80@loadbalancer --agents 2 Good to know --api-port 6550 is not required for the example to work. It\u2019s used to have k3s\u2019s API-Server listening on port 6550 with that port mapped to the host system. the port-mapping construct 8081:80@loadbalancer means map port 8081 from the host to port 80 on the container which matches the nodefilter loadbalancer the loadbalancer nodefilter matches only the serverlb that\u2019s deployed in front of a cluster\u2019s server nodes all ports exposed on the serverlb will be proxied to the same ports on all server nodes in the cluster Get the kubeconfig file export KUBECONFIG = \" $( k3d kubeconfig get k3s-default ) \" Create an nginx deployment kubectl create deployment nginx --image = nginx Create a ClusterIP service for it kubectl create service clusterip nginx --tcp = 80 :80 Create an ingress object for it with kubectl apply -f Note : k3s deploys traefik as the default ingress controller apiVersion : extensions/v1beta1 kind : Ingress metadata : name : nginx annotations : ingress.kubernetes.io/ssl-redirect : \"false\" spec : rules : - http : paths : - path : / backend : serviceName : nginx servicePort : 80 Curl it via localhost curl localhost:8081/
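For convenience, the ingress manifest shown above can also be piped straight to kubectl instead of being saved to a file first — a sketch using a heredoc:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80
EOF
```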
via Ingress"},{"location":"usage/guides/exposing_services/#2-via-nodeport","text":"Create a cluster, mapping the port 30080 from agent-0 to localhost:8082 k3d cluster create mycluster -p 8082 :30080@agent [ 0 ] --agents 2 Note: Kubernetes\u2019 default NodePort range is 30000-32767 \u2026 (Steps 2 and 3 like above) \u2026 Create a NodePort service for it with kubectl apply -f apiVersion : v1 kind : Service metadata : labels : app : nginx name : nginx spec : ports : - name : 80-80 nodePort : 30080 port : 80 protocol : TCP targetPort : 80 selector : app : nginx type : NodePort Curl it via localhost curl localhost:8082/","title":"2. via NodePort"},{"location":"usage/guides/registries/","text":"Registries \u00b6 Registries configuration file \u00b6 You can add registries by specifying them in a registries.yaml and mounting them at creation time: k3d cluster create mycluster --volume /home/YOU/my-registries.yaml:/etc/rancher/k3s/registries.yaml . This file is a regular k3s registries configuration file , and looks like this: mirrors : \"my.company.registry:5000\" : endpoint : - http://my.company.registry:5000 In this example, an image with a name like my.company.registry:5000/nginx:latest would be pulled from the registry running at http://my.company.registry:5000 . Note well there is an important limitation: this configuration file will only work with k3s >= v0.10.0 . It will fail silently with previous versions of k3s, but you find in the section below an alternative solution. This file can also be used for providing additional information necessary for accessing some registries, like authentication and certificates . Authenticated registries \u00b6 When using authenticated registries, we can add the username and password in a configs section in the registries.yaml , like this: mirrors : my.company.registry : endpoint : - http://my.company.registry configs : my.company.registry : auth : username : aladin password : abracadabra Secure registries \u00b6 When using secure registries, the registries.yaml file must include information about the certificates. For example, if you want to use images from the secure registry running at https://my.company.registry , you must first download a CA file valid for that server and store it in some well-known directory like ${HOME}/.k3d/my-company-root.pem . Then you have to mount the CA file in some directory in the nodes in the cluster and include that mounted file in a configs section in the registries.yaml file . For example, if we mount the CA file in /etc/ssl/certs/my-company-root.pem , the registries.yaml will look like: mirrors : my.company.registry : endpoint : - https://my.company.registry configs : my.company.registry : tls : # we will mount \"my-company-root.pem\" in the /etc/ssl/certs/ directory. 
ca_file : \"/etc/ssl/certs/my-company-root.pem\" Finally, we can create the cluster, mounting the CA file in the path we specified in ca_file : k3d cluster create --volume ${ HOME } /.k3d/my-registries.yaml:/etc/rancher/k3s/registries.yaml --volume ${ HOME } /.k3d/my-company-root.pem:/etc/ssl/certs/my-company-root.pem Using a local registry \u00b6 Using the k3d registry \u00b6 Not ported yet The k3d-managed registry has not yet been ported from v1.x to v3.x Using your own local registry \u00b6 You can start your own local registry it with some docker commands, like: docker volume create local_registry docker container run -d --name registry.localhost -v local_registry:/var/lib/registry --restart always -p 5000 :5000 registry:2 These commands will start your registry in registry.localhost:5000 . In order to push to this registry, you will need to make it accessible as described in the next section. Once your registry is up and running, we will need to add it to your registries.yaml configuration file. Finally, you have to connect the registry network to the k3d cluster network: docker network connect k3d-k3s-default registry.localhost . And then you can test your local registry . Pushing to your local registry address \u00b6 As per the guide above, the registry will be available at registry.localhost:5000 . All the nodes in your k3d cluster can resolve this hostname (thanks to the DNS server provided by the Docker daemon) but, in order to be able to push to this registry, this hostname also has to be resolved by your host. Luckily (for Linux users), NSS-myhostname ships with many Linux distributions and should resolve *.localhost automatically to 127.0.0.1 . Otherwise, it\u2019s installable using sudo apt install libnss-myhostname . If it\u2019s not the case, you can add an entry in your /etc/hosts file like this: 127 .0.0.1 registry.localhost Once again, this will only work with k3s >= v0.10.0 (see the some sections below when using k3s <= v0.9.1) Testing your registry \u00b6 You should test that you can push to your registry from your local development machine. use images from that registry in Deployments in your k3d cluster. We will verify these two things for a local registry (located at registry.localhost:5000 ) running in your development machine. Things would be basically the same for checking an external registry, but some additional configuration could be necessary in your local machine when using an authenticated or secure registry (please refer to Docker\u2019s documentation for this). First, we can download some image (like nginx ) and push it to our local registry with: ```shell script docker pull nginx:latest docker tag nginx:latest registry.localhost:5000/nginx:latest docker push registry.localhost:5000/nginx:latest Then we can deploy a pod referencing this image to your cluster: ```shell script cat <= v0.10.0 . It will fail silently with previous versions of k3s, but you find in the section below an alternative solution. 
","title":"Registries configuration file"},{"location":"usage/guides/registries/#authenticated-registries","text":"When using authenticated registries, we can add the username and password in a configs section in the registries.yaml , like this: mirrors : my.company.registry : endpoint : - http://my.company.registry configs : my.company.registry : auth : username : aladin password : abracadabra","title":"Authenticated registries"},{"location":"usage/guides/registries/#secure-registries","text":"When using secure registries, the registries.yaml file must include information about the certificates. For example, if you want to use images from the secure registry running at https://my.company.registry , you must first download a CA file valid for that server and store it in some well-known directory like ${HOME}/.k3d/my-company-root.pem . Then you have to mount the CA file in some directory in the nodes in the cluster and include that mounted file in a configs section in the registries.yaml file . For example, if we mount the CA file in /etc/ssl/certs/my-company-root.pem , the registries.yaml will look like: mirrors : my.company.registry : endpoint : - https://my.company.registry configs : my.company.registry : tls : # we will mount \"my-company-root.pem\" in the /etc/ssl/certs/ directory. ca_file : \"/etc/ssl/certs/my-company-root.pem\" Finally, we can create the cluster, mounting the CA file in the path we specified in ca_file : k3d cluster create --volume ${ HOME } /.k3d/my-registries.yaml:/etc/rancher/k3s/registries.yaml --volume ${ HOME } /.k3d/my-company-root.pem:/etc/ssl/certs/my-company-root.pem","title":"Secure registries"},{"location":"usage/guides/registries/#using-a-local-registry","text":"","title":"Using a local registry"},{"location":"usage/guides/registries/#using-the-k3d-registry","text":"Not ported yet The k3d-managed registry has not yet been ported from v1.x to v3.x","title":"Using the k3d registry"},{"location":"usage/guides/registries/#using-your-own-local-registry","text":"You can start your own local registry with a few docker commands, like: docker volume create local_registry docker container run -d --name registry.localhost -v local_registry:/var/lib/registry --restart always -p 5000 :5000 registry:2 These commands will start your registry at registry.localhost:5000 . In order to push to this registry, you will need to make it accessible as described in the next section. Once your registry is up and running, we will need to add it to your registries.yaml configuration file. Finally, you have to connect the registry network to the k3d cluster network: docker network connect k3d-k3s-default registry.localhost . And then you can test your local registry .","title":"Using your own local registry"},
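One way to run that test end-to-end — a sketch (deployment name hypothetical) that pushes an image as described in the sections below and then runs it inside the cluster:

```bash
# Push a test image to the local registry
docker pull nginx:latest
docker tag nginx:latest registry.localhost:5000/nginx:latest
docker push registry.localhost:5000/nginx:latest

# Run the pushed image in the k3d cluster and wait for the rollout
kubectl create deployment registry-test --image=registry.localhost:5000/nginx:latest
kubectl rollout status deployment/registry-test
```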
{"location":"usage/guides/registries/#pushing-to-your-local-registry-address","text":"As per the guide above, the registry will be available at registry.localhost:5000 . All the nodes in your k3d cluster can resolve this hostname (thanks to the DNS server provided by the Docker daemon) but, in order to be able to push to this registry, this hostname also has to be resolved by your host. Luckily (for Linux users), NSS-myhostname ships with many Linux distributions and should resolve *.localhost automatically to 127.0.0.1 . Otherwise, it\u2019s installable using sudo apt install libnss-myhostname . If that\u2019s not the case, you can add an entry in your /etc/hosts file like this: 127.0.0.1 registry.localhost Once again, this will only work with k3s >= v0.10.0 (see the sections below when using k3s <= v0.9.1)","title":"Pushing to your local registry address"},{"location":"usage/guides/registries/#testing-your-registry","text":"You should test that you can push to your registry from your local development machine, and use images from that registry in Deployments in your k3d cluster. We will verify these two things for a local registry (located at registry.localhost:5000 ) running on your development machine. Things would be basically the same for checking an external registry, but some additional configuration could be necessary on your local machine when using an authenticated or secure registry (please refer to Docker\u2019s documentation for this). First, we can download some image (like nginx ) and push it to our local registry with: docker pull nginx:latest docker tag nginx:latest registry.localhost:5000/nginx:latest docker push registry.localhost:5000/nginx:latest Then we can deploy a pod referencing this image to your cluster: cat <