Little helper to run CNCF's k3s in Docker
{"config":{"lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"Overview \u00b6 This page is targeting k3d v3.0.0 and newer! k3d is a lightweight wrapper to run k3s (Rancher Lab\u2019s minimal Kubernetes distribution) in docker. k3d makes it very easy to create single- and multi-node k3s clusters in docker, e.g. for local development on Kubernetes. View a quick demo Learning \u00b6 Rancher Meetup - May 2020 - Simplifying Your Cloud-Native Development Workflow With K3s, K3c and K3d (YouTube) k3d demo repository: iwilltry42/k3d-demo Requirements \u00b6 docker Releases \u00b6 Platform Stage Version Release Date GitHub Releases stable GitHub Releases latest Homebrew - - Installation \u00b6 You have several options there: use the install script to grab the latest release: wget: wget -q -O - https://raw.githubusercontent.com/rancher/k3d/master/install.sh | bash curl: curl -s https://raw.githubusercontent.com/rancher/k3d/master/install.sh | bash use the install script to grab a specific release (via TAG environment variable): wget: wget -q -O - https://raw.githubusercontent.com/rancher/k3d/master/install.sh | TAG = v3.0.0-beta.0 bash curl: curl -s https://raw.githubusercontent.com/rancher/k3d/master/install.sh | TAG = v3.0.0-beta.0 bash use Homebrew : brew install k3d (Homebrew is available for MacOS and Linux) Formula can be found in homebrew/homebrew-core and is mirrored to homebrew/linuxbrew-core install via AUR package rancher-k3d-bin : yay -S rancher-k3d-bin grab a release from the release tab and install it yourself. install via go: go install github.com/rancher/k3d ( Note : this will give you unreleased/bleeding-edge changes) Quick Start \u00b6 Create a cluster named mycluster with just a single master node: k3d create cluster mycluster Get the new cluster\u2019s connection details merged into your default kubeconfig (usually specified using the KUBECONFIG environment variable or the default path $HOME /.kube/config ) and directly switch to the new context: k3d get kubeconfig mycluster --switch Use the new cluster with kubectl , e.g.: kubectl get nodes Related Projects \u00b6 k3x : a graphics interface (for Linux) to k3d.","title":"Overview"},{"location":"#overview","text":"This page is targeting k3d v3.0.0 and newer! k3d is a lightweight wrapper to run k3s (Rancher Lab\u2019s minimal Kubernetes distribution) in docker. k3d makes it very easy to create single- and multi-node k3s clusters in docker, e.g. for local development on Kubernetes. 
# FAQ / Nice to know

## Issues with BTRFS

As @jaredallard pointed out, people running k3d on a system with btrfs may need to mount `/dev/mapper` into the nodes for the setup to work. This will do:

```bash
k3d create cluster CLUSTER_NAME -v /dev/mapper:/dev/mapper
```

## Issues with ZFS

k3s currently has no support for ZFS and thus, creating multi-master setups (e.g. `k3d create cluster multimaster --masters 3`) fails, because the initializing master node (server flag `--cluster-init`) errors out with the following log:

```
starting kubernetes: preparing server: start cluster and https: raft_init(): io: create I/O capabilities probe file: posix_allocate: operation not supported on socket
```

This issue can be worked around by providing docker with a different filesystem (that's also better for docker-in-docker stuff). A possible solution can be found here: https://github.com/rancher/k3s/issues/1688#issuecomment-619570374

## Pods evicted due to lack of disk space

Pods go to evicted state after doing X

- Related issues: #133 - Pods evicted due to NodeHasDiskPressure (collection of #119 and #130)
- Background: somehow docker runs out of space for the k3d node containers, which triggers a hard eviction in the kubelet
- Possible fix/workaround by @zer0def:
  - use a docker storage driver which cleans up properly (e.g. overlay2)
  - clean up or expand the docker root filesystem (see the sketch below)
  - change the kubelet's eviction thresholds upon cluster creation:

```bash
k3d create cluster \
  --k3s-agent-arg '--kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1%' \
  --k3s-agent-arg '--kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1%'
```
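For the "clean up or expand the docker root filesystem" workaround, a minimal sketch using standard docker CLI commands (how much space you need to reclaim depends on your setup):

```bash
# Show how much space images, containers and volumes currently use.
docker system df

# Remove stopped containers, dangling images, unused networks and build cache.
docker system prune

# Also remove unused volumes (careful: this deletes data of deleted clusters).
docker volume prune
```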
## Restarting a multi-master cluster or the initializing master node fails

- What you do: You create a cluster with more than one master node and later, you either stop master-0 or stop/start the whole cluster
- What fails: After the restart, you cannot connect to the cluster anymore and kubectl will give you a lot of errors
- What causes this issue: it's a known issue with dqlite in k3s which doesn't allow the initializing master node to go down
- What's the solution: Hopefully, this will be solved by the planned replacement of dqlite with embedded etcd in k3s
- Related issues: #262

# Feature Comparison: v1 vs. v3

## v1.x feature -> implementation in v3

- k3d
  - check-tools -> won't do
  - shell -> planned: `k3d shell CLUSTER`
    - --name -> planned: drop (now as arg)
    - --command -> planned: keep
    - --shell -> planned: keep (or second arg)
      - auto, bash, zsh
  - create -> `k3d create cluster CLUSTERNAME`
    - --name -> dropped, implemented via arg
    - --volume -> implemented
    - --port -> implemented
    - --port-auto-offset -> TBD
    - --api-port -> implemented
    - --wait -> implemented
    - --image -> implemented
    - --server-arg -> implemented as `--k3s-server-arg`
    - --agent-arg -> implemented as `--k3s-agent-arg`
    - --env -> planned
    - --label -> planned
    - --workers -> implemented
    - --auto-restart -> dropped (docker's `unless-stopped` is set by default)
    - --enable-registry -> planned (possible consolidation into fewer registry-related commands?)
    - --registry-name -> TBD
    - --registry-port -> TBD
    - --registry-volume -> TBD
    - --registries-file -> TBD
    - --enable-registry-cache -> TBD
  - (add-node) -> `k3d create node NODENAME`
    - --role -> implemented
    - --name -> dropped, implemented as arg
    - --count -> implemented as `--replicas`
    - --image -> implemented
    - --arg -> planned
    - --env -> planned
    - --volume -> planned
    - --k3s -> TBD
    - --k3s-secret -> TBD
    - --k3s-token -> TBD
  - delete -> `k3d delete cluster CLUSTERNAME`
    - --name -> dropped, implemented as arg
    - --all -> implemented
    - --prune -> TBD
    - --keep-registry-volume -> TBD
  - stop -> `k3d stop cluster CLUSTERNAME`
    - --name -> dropped, implemented as arg
    - --all -> implemented
  - start -> `k3d start cluster CLUSTERNAME`
    - --name -> dropped, implemented as arg
    - --all -> implemented
  - list -> dropped, implemented as `k3d get clusters`
  - get-kubeconfig -> `k3d get kubeconfig CLUSTERNAME`
    - --name -> dropped, implemented as arg
    - --all -> implemented
    - --overwrite -> implemented
  - import-images -> `k3d load image [--cluster CLUSTERNAME] [--keep] IMAGES`
    - --name -> implemented as `--cluster`
    - --no-remove -> implemented as `--keep-tarball`
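Read as a concrete migration example (hypothetical cluster name; `--no-deploy=traefik` is just an example k3s server flag), a v1.x invocation like:

```bash
# k3d v1.x
k3d create --name mycluster --workers 2 --server-arg --no-deploy=traefik
```

becomes, following the mapping above:

```bash
# k3d v3.x
k3d create cluster mycluster --workers 2 --k3s-server-arg --no-deploy=traefik
```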
v3"},{"location":"faq/v1vsv3-comparison/#v1x-feature-implementation-in-v3","text":"- k3d - check-tools -> won't do - shell -> planned: `k3d shell CLUSTER` - --name -> planned: drop (now as arg) - --command -> planned: keep - --shell -> planned: keep (or second arg) - auto, bash, zsh - create -> `k3d create cluster CLUSTERNAME` - --name -> dropped, implemented via arg - --volume -> implemented - --port -> implemented - --port-auto-offset -> TBD - --api-port -> implemented - --wait -> implemented - --image -> implemented - --server-arg -> implemented as `--k3s-server-arg` - --agent-arg -> implemented as `--k3s-agent-arg` - --env -> planned - --label -> planned - --workers -> implemented - --auto-restart -> dropped (docker's `unless-stopped` is set by default) - --enable-registry -> planned (possible consolidation into less registry-related commands?) - --registry-name -> TBD - --registry-port -> TBD - --registry-volume -> TBD - --registries-file -> TBD - --enable-registry-cache -> TBD - (add-node) -> `k3d create node NODENAME` - --role -> implemented - --name -> dropped, implemented as arg - --count -> implemented as `--replicas` - --image -> implemented - --arg -> planned - --env -> planned - --volume -> planned - --k3s -> TBD - --k3s-secret -> TBD - --k3s-token -> TBD - delete -> `k3d delete cluster CLUSTERNAME` - --name -> dropped, implemented as arg - --all -> implemented - --prune -> TBD - --keep-registry-volume -> TBD - stop -> `k3d stop cluster CLUSTERNAME` - --name -> dropped, implemented as arg - --all -> implemented - start -> `k3d start cluster CLUSTERNAME` - --name -> dropped, implemented as arg - --all -> implemented - list -> dropped, implemented as `k3d get clusters` - get-kubeconfig -> `k3d get kubeconfig CLUSTERNAME` - --name -> dropped, implemented as arg - --all -> implemented - --overwrite -> implemented - import-images -> `k3d load image [--cluster CLUSTERNAME] [--keep] IMAGES` - --name -> implemented as `--cluster` - --no-remove -> implemented as `--keep-tarball`","title":"v1.x feature -&gt; implementation in v3"},{"location":"internals/defaults/","text":"Defaults \u00b6 multiple master nodes by default, when --master > 1 and no --datastore-x option is set, the first master node (master-0) will be the initializing master node the initializing master node will have the --cluster-init flag appended all other master nodes will refer to the initializing master node via --server https://<init-node>:6443 API-Ports by default, we don\u2019t expose any API-Port (no host port mapping) kubeconfig if no output is set explicitly (via the --output flag), we use the default loading rules to get the default kubeconfig: First: kubeconfig specified via the KUBECONFIG environment variable (error out if multiple are specified) Second: default kubeconfig in home directory (e.g. 
# Networking

- Related issues: rancher/k3d #220

## Introduction

By default, k3d creates a new (docker) network for every new cluster. Use the `--network STRING` flag upon creation to connect to an existing network instead. Existing networks won't be managed by k3d together with the cluster lifecycle.

## Connecting to docker "internal"/pre-defined networks

### host network

When using the `--network` flag to connect to the host network (i.e. `k3d create cluster --network host`), you won't be able to create more than one master node. An edge case would be one master node (with agent disabled) and one worker node.

### bridge network

By default, every network that k3d creates is working in bridge mode. But when you try to use `--network bridge` to connect to docker's internal bridge network, you may run into issues with grabbing certificates from the API-Server. Single-node clusters should work though.

### none "network"

Well.. this doesn't really make sense for k3d anyway ¯\_(ツ)_/¯
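A minimal sketch of attaching a cluster to a pre-existing network (the network name `my-net` is an arbitrary example); since k3d doesn't manage the network, you also clean it up yourself:

```bash
# Create a user-defined bridge network outside of k3d.
docker network create my-net

# Create the cluster attached to that existing network.
k3d create cluster mycluster --network my-net

# After deleting the cluster, the network stays around until you remove it.
k3d delete cluster mycluster
docker network rm my-net
```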
# Command Tree

```bash
k3d
  --runtime               # choose the container runtime (default: docker)
  --verbose               # enable verbose (debug) logging (default: false)
  create
    cluster [CLUSTERNAME]     # default cluster name is 'k3s-default'
      -a, --api-port          # specify the port on which the cluster will be accessible (e.g. via kubectl)
      -i, --image             # specify which k3s image should be used for the nodes
      --k3s-agent-arg         # add additional arguments to the k3s agent (see https://rancher.com/docs/k3s/latest/en/installation/install-options/agent-config/#k3s-agent-cli-help)
      --k3s-server-arg        # add additional arguments to the k3s server (see https://rancher.com/docs/k3s/latest/en/installation/install-options/server-config/#k3s-server-cli-help)
      -m, --masters           # specify how many master nodes you want to create
      --network               # specify a network you want to connect to
      --no-image-volume       # disable the creation of a volume for storing images (used for the 'k3d load image' command)
      -p, --port              # add some more port mappings
      --token                 # specify a cluster token (default: auto-generated)
      --timeout               # specify a timeout, after which the cluster creation will be interrupted and changes rolled back
      --update-kubeconfig     # enable the automated update of the default kubeconfig with the details of the newly created cluster (also sets '--wait=true')
      --switch                # (implies --update-kubeconfig) automatically sets the current-context of your default kubeconfig to the new cluster's context
      -v, --volume            # specify additional bind-mounts
      --wait                  # enable waiting for all master nodes to be ready before returning
      -w, --workers           # specify how many worker nodes you want to create
    node NODENAME             # create new nodes (and add them to existing clusters)
      -c, --cluster           # specify the cluster that the node shall connect to
      -i, --image             # specify which k3s image should be used for the node(s)
      --replicas              # specify how many replicas you want to create with this spec
      --role                  # specify the node role
      --wait                  # wait for the node to be up and running before returning
      --timeout               # specify a timeout duration, after which the node creation will be interrupted, if not done yet
  delete
    cluster CLUSTERNAME       # delete an existing cluster
      -a, --all               # delete all existing clusters
    node NODENAME             # delete an existing node
      -a, --all               # delete all existing nodes
  start
    cluster CLUSTERNAME       # start a (stopped) cluster
      -a, --all               # start all clusters
      --wait                  # wait for all masters and master-loadbalancer to be up before returning
      --timeout               # maximum waiting time for '--wait' before canceling/returning
    node NODENAME             # start a (stopped) node
  stop
    cluster CLUSTERNAME       # stop a cluster
      -a, --all               # stop all clusters
    node                      # stop a node
  get
    cluster [CLUSTERNAME [CLUSTERNAME ...]]
      --no-headers            # do not print headers
      --token                 # show column with cluster tokens
    node NODENAME
      --no-headers            # do not print headers
    kubeconfig (CLUSTERNAME [CLUSTERNAME ...] | --all)
      -a, --all               # get kubeconfigs from all clusters
      --output                # specify the output file where the kubeconfig should be written to
      --overwrite             # [Careful!] forcefully overwrite the output file, ignoring existing contents
      -s, --switch            # switch current-context in kubeconfig to the new context
      -u, --update            # update conflicting fields in existing kubeconfig (default: true)
  load
    image [IMAGE | ARCHIVE [IMAGE | ARCHIVE ...]]  # load one or more images from the local runtime environment or tar-archives into k3d clusters
      -c, --cluster           # clusters to load the image into
      -k, --keep-tarball      # do not delete the image tarball from the shared volume after completion
  completion SHELL            # generate completion scripts
  version                     # show k3d build version
  help [COMMAND]              # show help text for any command
```
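Putting several of these flags together (illustrative values only; the port-mapping syntax is explained in the Exposing Services guide below):

```bash
# One master, three workers, API-Server on host port 6550,
# ingress port 80 exposed as localhost:8080, kubeconfig updated and switched.
k3d create cluster dev \
  --masters 1 \
  --workers 3 \
  --api-port 6550 \
  -p 8080:80@loadbalancer \
  --update-kubeconfig --switch
```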
# Handling Kubeconfigs

By default, k3d won't touch your kubeconfig without you telling it to do so. To get a kubeconfig set up for you to connect to a k3d cluster, you can go different ways.

What is the default kubeconfig? We determine the path of the used or default kubeconfig in two ways:

- First: kubeconfig specified via the `KUBECONFIG` environment variable (error out if multiple are specified)
- Second: default kubeconfig in home directory (e.g. on Linux it's `$HOME/.kube/config`)

## Getting the kubeconfig for a newly created cluster

Update your default kubeconfig upon cluster creation (note: this won't switch the current-context):

```bash
k3d create cluster mycluster --update-kubeconfig
```

Update your default kubeconfig after cluster creation (note: this won't switch the current-context):

```bash
k3d get kubeconfig mycluster
```

Update a different kubeconfig after cluster creation (note: this won't switch the current-context; the file will be created if it doesn't exist):

```bash
k3d get kubeconfig mycluster --output some/other/file.yaml
```

Switching the current context: none of the above options switch the current-context. This is intended to be least intrusive, since the current-context has a global effect. You can switch the current-context directly with the `get kubeconfig` command by adding the `--switch` flag.

## Removing cluster details from the kubeconfig

`k3d delete cluster mycluster` will always remove the details for `mycluster` from the default kubeconfig.

## Handling multiple clusters

`k3d get kubeconfig` lets you specify one or more clusters via arguments, or all via `--all`. All kubeconfigs will then be merged into a single file, which is either the default kubeconfig or the kubeconfig specified via `--output FILE`. Note that with multiple clusters specified, the `--switch` flag will change the current context to the cluster which was last in the list.
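A short sketch of the multi-cluster case (the cluster names and the output path are arbitrary examples):

```bash
# Merge the kubeconfigs of two clusters into one dedicated file.
k3d get kubeconfig clusterA clusterB --output ~/.k3d/merged-kubeconfig.yaml

# Point kubectl at that file and list the merged contexts.
KUBECONFIG=~/.k3d/merged-kubeconfig.yaml kubectl config get-contexts
```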
# Creating multi-master clusters

Important note: for the best results (and fewer unexpected issues), choose 1, 3, 5, ... master nodes.

## Embedded dqlite

Create a cluster with 3 master nodes using k3s' embedded dqlite database. The first master to be created will use the `--cluster-init` flag and k3d will wait for it to be up and running before creating (and connecting) the other master nodes.

```bash
k3d create cluster multimaster --masters 3
```

## Adding master nodes to a running cluster

In theory (and also in practice in most cases), this is as easy as executing the following command:

```bash
k3d create node newmaster --cluster multimaster --role master
```

There's a trap! If your cluster was initially created with only a single master node, then this will fail. That's because the initial master node was not started with the `--cluster-init` flag and thus is not using the dqlite backend.
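The same `create node` mechanism also scales out workers; a small sketch (node name and replica count are arbitrary examples, flags as per the Command Tree above):

```bash
# Add two worker nodes to the running 'multimaster' cluster in one go.
k3d create node extraworker --cluster multimaster --role worker --replicas 2

# Confirm that the new nodes joined.
kubectl get nodes
```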
# Exposing Services

## 1. via Ingress

In this example, we will deploy a simple nginx webserver deployment and make it accessible via ingress. Therefore, we have to create the cluster in a way that the internal port 80 (on which the traefik ingress controller is listening) is exposed on the host system.

Create a cluster, mapping the ingress port 80 to localhost:8081:

```bash
k3d create cluster --api-port 6550 -p 8081:80@loadbalancer --workers 2
```

Good to know:

- `--api-port 6550` is not required for the example to work.
  It's used to have k3s's API-Server listening on port 6550 with that port mapped to the host system.
- the port-mapping construct `8081:80@loadbalancer` means: map port 8081 from the host to port 80 on the container which matches the nodefilter `loadbalancer`
  - the `loadbalancer` nodefilter matches only the `masterlb` that's deployed in front of a cluster's master nodes
  - all ports exposed on the `masterlb` will be proxied to the same ports on all master nodes in the cluster

Get the kubeconfig for the new cluster and switch to its context:

```bash
k3d get kubeconfig k3s-default --switch
```

Create a nginx deployment:

```bash
kubectl create deployment nginx --image=nginx
```

Create a ClusterIP service for it:

```bash
kubectl create service clusterip nginx --tcp=80:80
```

Create an ingress object for it with `kubectl apply -f` (note: k3s deploys traefik as the default ingress controller):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80
```

Curl it via localhost:

```bash
curl localhost:8081/
```

## 2. via NodePort

Create a cluster, mapping the port 30080 from worker-0 to localhost:8082:

```bash
k3d create cluster mycluster -p 8082:30080@worker[0] --workers 2
```

Note: Kubernetes' default NodePort range is 30000-32767

... (Steps 2 and 3 like above) ...

Create a NodePort service for it with `kubectl apply -f`:

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  ports:
  - name: 80-80
    nodePort: 30080
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: NodePort
```

Curl it via localhost:

```bash
curl localhost:8082/
```

# Registries

## Registries configuration file

You can add registries by specifying them in a `registries.yaml` and mounting it at creation time: `k3d create cluster mycluster --volume /home/YOU/my-registries.yaml:/etc/rancher/k3s/registries.yaml`.

This file is a regular k3s registries configuration file, and looks like this:

```yaml
mirrors:
  "my.company.registry:5000":
    endpoint:
      - http://my.company.registry:5000
```

In this example, an image with a name like `my.company.registry:5000/nginx:latest` would be pulled from the registry running at `http://my.company.registry:5000`.

Note well, there is an important limitation: this configuration file will only work with k3s >= v0.10.0. It will fail silently with previous versions of k3s, but you will find an alternative solution in the section below.

This file can also be used for providing additional information necessary for accessing some registries, like authentication and certificates.

## Authenticated registries

When using authenticated registries, we can add the username and password in a `configs` section in the `registries.yaml`, like this:

```yaml
mirrors:
  my.company.registry:
    endpoint:
      - http://my.company.registry

configs:
  my.company.registry:
    auth:
      username: aladin
      password: abracadabra
```

## Secure registries

When using secure registries, the `registries.yaml` file must include information about the certificates. For example, if you want to use images from the secure registry running at `https://my.company.registry`, you must first download a CA file valid for that server and store it in some well-known directory like `${HOME}/.k3d/my-company-root.pem`.
Then you have to mount the CA file in some directory in the nodes in the cluster and include that mounted file in a `configs` section in the `registries.yaml` file. For example, if we mount the CA file in `/etc/ssl/certs/my-company-root.pem`, the `registries.yaml` will look like:

```yaml
mirrors:
  my.company.registry:
    endpoint:
      - https://my.company.registry

configs:
  my.company.registry:
    tls:
      # we will mount "my-company-root.pem" in the /etc/ssl/certs/ directory.
      ca_file: "/etc/ssl/certs/my-company-root.pem"
```

Finally, we can create the cluster, mounting the CA file in the path we specified in `ca_file`:

```bash
k3d create cluster \
  --volume ${HOME}/.k3d/my-registries.yaml:/etc/rancher/k3s/registries.yaml \
  --volume ${HOME}/.k3d/my-company-root.pem:/etc/ssl/certs/my-company-root.pem
```

## Using a local registry

### Using the k3d registry

Not ported yet: the k3d-managed registry has not yet been ported from v1.x to v3.x.

### Using your own local registry

You can start your own local registry with some docker commands, like:

```bash
docker volume create local_registry
docker container run -d --name registry.localhost -v local_registry:/var/lib/registry --restart always -p 5000:5000 registry:2
```

These commands will start your registry at registry.localhost:5000. In order to push to this registry, you will need to make it accessible as described in the next section. Once your registry is up and running, you will need to add it to your `registries.yaml` configuration file (as shown in the sketch after this section). Finally, you have to connect the registry network to the k3d cluster network: `docker network connect k3d-k3s-default registry.localhost`. And then you can test your local registry.

## Pushing to your local registry address

As per the guide above, the registry will be available at registry.localhost:5000. All the nodes in your k3d cluster can resolve this hostname (thanks to the DNS server provided by the Docker daemon) but, in order to be able to push to this registry, this hostname also has to be resolved by your host.

Luckily (for Linux users), NSS-myhostname ships with many Linux distributions and should resolve `*.localhost` automatically to 127.0.0.1. Otherwise, it's installable using `sudo apt install libnss-myhostname`. If that's not the case, you can add an entry to your `/etc/hosts` file like this:

```
127.0.0.1 registry.localhost
```

Once again, this will only work with k3s >= v0.10.0 (see the sections below when using k3s <= v0.9.1).
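To make the "add it to your registries.yaml" step above concrete, here is a minimal sketch for this local registry (the file path is an arbitrary example; the mount target is the one used throughout this page):

```bash
# Write a registries.yaml that mirrors registry.localhost:5000.
cat > ${HOME}/.k3d/my-registries.yaml <<EOF
mirrors:
  "registry.localhost:5000":
    endpoint:
      - http://registry.localhost:5000
EOF

# Create a cluster that mounts this file where k3s expects it.
k3d create cluster mycluster \
  --volume ${HOME}/.k3d/my-registries.yaml:/etc/rancher/k3s/registries.yaml
```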
## Testing your registry

You should test that you can:

- push to your registry from your local development machine
- use images from that registry in Deployments in your k3d cluster

We will verify these two things for a local registry (located at registry.localhost:5000) running on your development machine. Things would be basically the same for checking an external registry, but some additional configuration could be necessary in your local machine when using an authenticated or secure registry (please refer to Docker's documentation for this).

First, we can download some image (like nginx) and push it to our local registry with:

```bash
docker pull nginx:latest
docker tag nginx:latest registry.localhost:5000/nginx:latest
docker push registry.localhost:5000/nginx:latest
```

Then we can deploy a pod referencing this image to your cluster:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test-registry
  labels:
    app: nginx-test-registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-test-registry
  template:
    metadata:
      labels:
        app: nginx-test-registry
    spec:
      containers:
      - name: nginx-test-registry
        image: registry.localhost:5000/nginx:latest
        ports:
        - containerPort: 80
EOF
```

Then you should check that the pod is running with `kubectl get pods -l "app=nginx-test-registry"`.

## Configuring registries for k3s <= v0.9.1

k3s servers below v0.9.1 do not recognize the `registries.yaml` file as described in the beginning, so you will need to embed the contents of that file in a containerd configuration file. You will have to create your own containerd configuration file at some well-known path like `${HOME}/.k3d/config.toml.tmpl`, like this:

```toml
# Original section: no changes
[plugins.opt]
path = "{{ .NodeConfig.Containerd.Opt }}"
[plugins.cri]
stream_server_address = "{{ .NodeConfig.AgentConfig.NodeName }}"
stream_server_port = "10010"
{{- if .IsRunningInUserNS }}
disable_cgroup = true
disable_apparmor = true
restrict_oom_score_adj = true
{{ end -}}
{{- if .NodeConfig.AgentConfig.PauseImage }}
sandbox_image = "{{ .NodeConfig.AgentConfig.PauseImage }}"
{{ end -}}
{{- if not .NodeConfig.NoFlannel }}
[plugins.cri.cni]
bin_dir = "{{ .NodeConfig.AgentConfig.CNIBinDir }}"
conf_dir = "{{ .NodeConfig.AgentConfig.CNIConfDir }}"
{{ end -}}

# Added section: additional registries and the endpoints
[plugins.cri.registry.mirrors]
[plugins.cri.registry.mirrors."registry.localhost:5000"]
endpoint = ["http://registry.localhost:5000"]
```

and then mount it at `/var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl` (where containerd in your k3d nodes will load it) when creating the k3d cluster:

```bash
k3d create cluster mycluster \
  --volume ${HOME}/.k3d/config.toml.tmpl:/var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl
```
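As a final sanity check for the registry itself (independent of k3s), the standard Docker registry v2 HTTP API can be queried from the host; a minimal sketch, assuming the local registry from the sections above:

```bash
# List the repositories the registry knows about; after the push in the
# "Testing your registry" section, this should include "nginx".
curl http://registry.localhost:5000/v2/_catalog
```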