diff --git a/404.html b/404.html deleted file mode 100644 index 55a72a5b..00000000 --- a/404.html +++ /dev/null @@ -1,898 +0,0 @@
btrfs¶
-Running k3d on a system with btrfs may require mounting /dev/mapper into the nodes for the setup to work:
-k3d cluster create CLUSTER_NAME -v /dev/mapper:/dev/mapper
-
zfs¶
-k3s currently has no support for ZFS, so creating multi-server setups (e.g. k3d cluster create multiserver --servers 3) fails: the initializing server node (server flag --cluster-init) errors out with the following log:
-starting kubernetes: preparing server: start cluster and https: raft_init(): io: create I/O capabilities probe file: posix_allocate: operation not supported on socket
-
Pods evicted due to NodeHasDiskPressure¶
-(collection of #119 and #130)
-Fix: change the kubelet’s eviction thresholds upon cluster creation:
-k3d cluster create \
- --k3s-agent-arg '--kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1%' \
- --k3s-agent-arg '--kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1%'
-
Restarting the initializing server node fails¶
-If you delete or stop/start the initializing server node (server-0), or stop/start the whole cluster, kubectl will give you a lot of errors. This is caused by k3s, which doesn’t allow the initializing server node to go down.
Passing flags on to e.g. the kube-apiserver¶
-Example (enabling the EphemeralContainers feature gate):
-k3d cluster create --k3s-server-arg '--kube-apiserver-arg=feature-gates=EphemeralContainers=true'
-
-Note: Be aware of where the flags require dashes (--) and where not:
-the k3s flag (--kube-apiserver-arg) has the dashes
-the kube-apiserver flag feature-gates doesn’t have them (k3s adds them internally)
-Second example:
-k3d cluster create k3d-one \
- --k3s-server-arg --cluster-cidr="10.118.0.0/17" \
- --k3s-server-arg --service-cidr="10.118.128.0/17" \
- --k3s-server-arg --disable=servicelb \
- --k3s-server-arg --disable=traefik \
- --verbose
-
"
and '
quotes, just be aware, that sometimes shells also try to interpret/interpolate parts of the commandshost.k3d.internal
entry into the k3d containers (k3s nodes) and into the CoreDNS ConfigMap, enabling you to access your host system by referring to it as host.k3d.internal
Running behind a corporate proxy¶
-Running k3d behind a corporate proxy can lead to issues that have already been reported more than once.
-Some can be fixed by passing the HTTP_PROXY environment variables to k3d, some have to be fixed in docker’s daemon.json file, and some are as easy as adding a volume mount.
x509: certificate signed by unknown authority
¶Example Error Message:
-Failed to create pod sandbox: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: x509: certificate signed by unknown authority
-
Solution: mount your CA certificate into the node containers:
-k3d cluster create --volume /path/to/your/certs.crt:/etc/ssl/certs/yourcert.crt
Spurious entries in /proc after deleting a k3d cluster with shared mounts¶
-Symptom: grep k3d /proc/*/mountinfo shows many spurious entries, and pods fail with no space left on device: unknown when they are scheduled to the nodes.
-Cleanup: lazily unmount the stale mounts (run the diff and check its output before piping it into xargs umount -l):
-diff <(df -ha | grep pods | awk '{print $NF}') <(df -h | grep pods | awk '{print $NF}') | awk '{print $2}' | xargs umount -l
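The one-liner compares the mount list reported by df -ha (all mounts) against df -h (live mounts); anything only in the first list is stale. The same set logic in Python (an illustrative sketch, not part of k3d):

```python
def stale_mounts(all_mounts, live_mounts):
    """Return mounts present in the `df -ha` list but missing from `df -h`.

    These are the leftover shared mounts that can be lazily unmounted
    (`umount -l`). The order of the first list is preserved.
    """
    live = set(live_mounts)
    return [m for m in all_mounts if m not in live]

# Example: two leftover pod mounts survive cluster deletion.
print(stale_mounts(
    ["/var/lib/pods/a", "/var/lib/pods/b", "/var/lib/pods/c"],
    ["/var/lib/pods/b"],
))  # ['/var/lib/pods/a', '/var/lib/pods/c']
```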
Nodes fail to start or get stuck in NotReady state with log nf_conntrack_max: permission denied¶
-Example log line:
-<TIMESTAMP> F0516 05:05:31.782902 7 server.go:495] open /proc/sys/net/netfilter/nf_conntrack_max: permission denied
-
-Cause: kube-proxy is not able to set the nf_conntrack_max value anymore.
-Workaround: tell kube-proxy to not even try to set this value:
k3d cluster create \
- --k3s-server-arg "--kube-proxy-arg=conntrack-max-per-core=0" \
- --k3s-agent-arg "--kube-proxy-arg=conntrack-max-per-core=0" \
- --image rancher/k3s:v1.20.6-k3s
-
This is going to be fixed “upstream” in k3s itself in rancher/k3s#3337 and backported to k3s versions as low as v1.18.
DockerHub pull rate limit¶
-You’re deploying something to the cluster using an image from DockerHub and the image fails to be pulled, with a 429 response code and a message saying You have reached your pull rate limit. You may increase the limit by authenticating and upgrading.
-This is caused by DockerHub’s pull rate limit (see https://docs.docker.com/docker-hub/download-rate-limit/), which limits pulls by unauthenticated/anonymous users to 100 per 6 hours and by authenticated (non-paying) users to 200 per 6 hours (as of the time of writing).
-a) use images from a private registry, e.g. configured as a pull-through cache for DockerHub
-b) use a different public registry without such limitations, if the same image is stored there
-c) authenticate containerd inside k3s/k3d to use your DockerHub user
Create a registry configuration file for containerd:
-# saved as e.g. $HOME/registries.yaml
-configs:
- "docker.io":
- auth:
- username: "$USERNAME"
- password: "$PASSWORD"
-
Create a k3d cluster using that config:
-k3d cluster create --registry-config $HOME/registries.yaml
-
Profit. That’s it. In our test we pulled the same image 120 times in a row (confirming that the pull count went up) without being rate limited, as a non-paying, normal user.
Multiple server nodes: when --servers > 1 and no --datastore-x option is set, the first server node (server-0) will be the initializing server node:
-it gets the --cluster-init flag appended
-all other server nodes refer to it via --server https://<init-node>:6443
API port: by default, we expose the Kubernetes API (port 6443) by forwarding traffic from the default server loadbalancer (nginx container) to the server node(s); port 6443 of the loadbalancer is then mapped to a specific (--api-port flag) or a random (default) port on the host system.
Kubeconfig: if --kubeconfig-update-default is set, we use the default loading rules to get the default kubeconfig (e.g. $HOME/.kube/config).
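The default loading rules mentioned here resolve roughly like this (a simplified sketch; the real rules also handle multi-path KUBECONFIG values):

```python
import os

def default_kubeconfig_path():
    """Resolve the default kubeconfig: $KUBECONFIG wins, else $HOME/.kube/config."""
    env = os.environ.get("KUBECONFIG")
    if env:
        # note: a full implementation would split on os.pathsep for multi-path values
        return env
    return os.path.join(os.path.expanduser("~"), ".kube", "config")
```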
By default, k3d creates a new (docker) network for every new cluster.
-Use the --network STRING flag upon creation to connect to an existing network.
-Existing networks won’t be managed by k3d together with the cluster lifecycle.
host
network¶When using the --network
flag to connect to the host network (i.e. k3d cluster create --network host
), you won’t be able to create more than one server node.
-An edge case would be one server node (with agent disabled) and one agent node.
bridge
network¶By default, every network that k3d creates is working in bridge
mode.
-But when you try to use --network bridge
to connect to docker’s internal bridge
network, you may run into issues with grabbing certificates from the API-Server.
-Single-Node clusters should work though.
none
“network”¶Well.. this doesn’t really make sense for k3d anyway ¯\_(ツ)_/¯
On this page we’ll try to give an overview of all the moving bits and pieces in k3d to ease contributions to the project.
-.github/: GitHub-specific files (issue/PR templates, workflows)
-cmd/: everything related to the k3d CLI commands
-docgen/: used to auto-generate the command documentation in docs/usage/commands/
-docs/: all the documentation (published to k3d.io)
-pkg/: the Go packages; all parts of cmd/ that do non-trivial things are imported from here
-proxy/: sources of the rancher/k3d-proxy container image, which is used as a loadbalancer/proxy in front of (almost) every k3d cluster
-tests/: the test suite
-tools/: sources of the rancher/k3d-tools container image, which supports some k3d functionality like k3d image import
-vendor/: result of go mod vendor, which contains all dependencies of k3d
-version/: package into which go build injects the version tags when building k3d, shown by k3d version
Inside pkg/:
-actions/: hook actions run at specific points of the node/cluster lifecycle
-client/: top-level functionality to work with k3d primitives
-config/: everything related to the config file(s), e.g. the SimpleConfig and ClusterConfig types
-runtimes/: container runtime integration; functions in client/ eventually call runtime functions to “materialize” nodes and clusters
-tools/: interface to the k3d-tools container
-types/: definitions of k3d primitives, e.g. what a Node or a Cluster is in k3d
-util/: helper functions
By default, every k3d cluster consists of at least 2 containers (nodes):
-(optional, but default and strongly recommended) loadbalancer
-image: rancher/k3d-proxy, built from proxy/
-forwards traffic on port 6443 (default listening port of K3s) to all the server nodes in the cluster
-(required, always present) primary server node
-image: rancher/k3s, built from github.com/k3s-io/k3s
-runs k3s server (with the --cluster-init flag, if it’s a multi-server cluster)
-(optional) secondary server node(s)
-image: rancher/k3s, built from github.com/k3s-io/k3s
-(optional) agent node(s)
-image: rancher/k3s, built from github.com/k3s-io/k3s
-runs k3s agent
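The containers listed above follow a predictable naming pattern, which the following sketch illustrates (hypothetical helper; the k3d-<cluster>-<role>-<index> and -serverlb names are how the containers typically show up in docker ps, not an API of k3d):

```python
def container_names(cluster, servers=1, agents=0, loadbalancer=True):
    """Predict the docker container names for a k3d cluster (illustrative sketch)."""
    names = [f"k3d-{cluster}-server-{i}" for i in range(servers)]
    names += [f"k3d-{cluster}-agent-{i}" for i in range(agents)]
    if loadbalancer:
        names.append(f"k3d-{cluster}-serverlb")
    return names

print(container_names("mycluster", servers=1, agents=1))
# ['k3d-mycluster-server-0', 'k3d-mycluster-agent-0', 'k3d-mycluster-serverlb']
```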
The k3d repository mainly leverages the following two CI systems:
-push events end up here (also does the releases, when a tag is pushed): https://drone-publish.rancher.io/rancher/k3d
-PRs end up here: https://drone-pr.rancher.io/rancher/k3d
The website k3d.io containing all the documentation for k3d is built using mkdocs
, configured via the mkdocs.yml
config file with all the content residing in the docs/
directory (Markdown).
-Use mkdocs serve
in the repository root to build and serve the webpage locally.
-Some parts of the documentation are auto-generated: docs/usage/commands/ is generated using Cobra’s command docs generation functionality in docgen/.
k3d
- --verbose # GLOBAL: enable verbose (debug) logging (default: false)
- --trace # GLOBAL: enable super verbose logging (trace logging) (default: false)
- --version # show k3d and k3s version
- -h, --help # GLOBAL: show help text
-
- cluster [CLUSTERNAME] # default cluster name is 'k3s-default'
- create
- -a, --agents # specify how many agent nodes you want to create (integer, default: 0)
- --agents-memory # specify memory limit for agent containers/nodes (unit, e.g. 1g)
- --api-port # specify the port on which the cluster will be accessible (format '[HOST:]HOSTPORT', default: random)
- -c, --config # use a config file (format 'PATH')
- -e, --env # add environment variables to the nodes (quoted string, format: 'KEY[=VALUE][@NODEFILTER[;NODEFILTER...]]', use flag multiple times)
- --gpus # [from docker CLI] add GPU devices to the node containers (string, e.g. 'all')
- -i, --image # specify which k3s image should be used for the nodes (string, default: 'docker.io/rancher/k3s:v1.20.0-k3s2', tag changes per build)
- --k3s-agent-arg # add additional arguments to the k3s agent (quoted string, use flag multiple times) (see https://rancher.com/docs/k3s/latest/en/installation/install-options/agent-config/#k3s-agent-cli-help)
- --k3s-server-arg # add additional arguments to the k3s server (quoted string, use flag multiple times) (see https://rancher.com/docs/k3s/latest/en/installation/install-options/server-config/#k3s-server-cli-help)
- --kubeconfig-switch-context # (implies --kubeconfig-update-default) automatically sets the current-context of your default kubeconfig to the new cluster's context (default: true)
- --kubeconfig-update-default # enable the automated update of the default kubeconfig with the details of the newly created cluster (also sets '--wait=true') (default: true)
- -l, --label # add (docker) labels to the node containers (format: 'KEY[=VALUE][@NODEFILTER[;NODEFILTER...]]', use flag multiple times)
- --network # specify an existing (docker) network you want to connect to (string)
- --no-hostip # disable the automatic injection of the Host IP as 'host.k3d.internal' into the containers and CoreDNS (default: false)
- --no-image-volume # disable the creation of a volume for storing images (used for the 'k3d image import' command) (default: false)
- --no-lb # disable the creation of a load balancer in front of the server nodes (default: false)
- --no-rollback # disable the automatic rollback actions, if anything goes wrong (default: false)
- -p, --port # add some more port mappings (format: '[HOST:][HOSTPORT:]CONTAINERPORT[/PROTOCOL][@NODEFILTER]', use flag multiple times)
- --registry-create # create a new (docker) registry dedicated for this cluster (default: false)
- --registry-use # use an existing local (docker) registry with this cluster (string, use multiple times)
- -s, --servers # specify how many server nodes you want to create (integer, default: 1)
- --servers-memory # specify memory limit for server containers/nodes (unit, e.g. 1g)
- --token # specify a cluster token (string, default: auto-generated)
- --timeout # specify a timeout, after which the cluster creation will be interrupted and changes rolled back (duration, e.g. '10s')
- -v, --volume # specify additional bind-mounts (format: '[SOURCE:]DEST[@NODEFILTER[;NODEFILTER...]]', use flag multiple times)
- --wait # enable waiting for all server nodes to be ready before returning (default: true)
- start CLUSTERNAME # start a (stopped) cluster
- -a, --all # start all clusters (default: false)
- --wait # wait for all servers and server-loadbalancer to be up before returning (default: true)
- --timeout # maximum waiting time for '--wait' before canceling/returning (duration, e.g. '10s')
- stop CLUSTERNAME # stop a cluster
- -a, --all # stop all clusters (default: false)
- delete CLUSTERNAME # delete an existing cluster
- -a, --all # delete all existing clusters (default: false)
- list [CLUSTERNAME [CLUSTERNAME ...]]
- --no-headers # do not print headers (default: false)
- --token # show column with cluster tokens (default: false)
- -o, --output # format the output (format: 'json|yaml')
- completion [bash | zsh | fish | (psh | powershell)] # generate completion scripts for common shells
- config
- init # write a default k3d config (as a starting point)
- -f, --force # force overwrite target file (default: false)
- -o, --output # file to write to (string, default "k3d-default.yaml")
- help [COMMAND] # show help text for any command
- image
- import [IMAGE | ARCHIVE [IMAGE | ARCHIVE ...]] # Load one or more images from the local runtime environment or tar-archives into k3d clusters
- -c, --cluster # clusters to load the image into (string, use flag multiple times, default: k3s-default)
- -k, --keep-tarball # do not delete the image tarball from the shared volume after completion (default: false)
- kubeconfig
- get (CLUSTERNAME [CLUSTERNAME ...] | --all) # get kubeconfig from cluster(s) and write it to stdout
- -a, --all # get kubeconfigs from all clusters (default: false)
- merge | write (CLUSTERNAME [CLUSTERNAME ...] | --all) # get kubeconfig from cluster(s) and merge it/them into a (kubeconfig-)file
- -a, --all # get kubeconfigs from all clusters (default: false)
- -s, --kubeconfig-switch-context # switch current-context in kubeconfig to the new context (default: true)
- -d, --kubeconfig-merge-default # update the default kubeconfig (usually $KUBECONFIG or $HOME/.kube/config)
- -o, --output # specify the output file where the kubeconfig should be written to (string)
- --overwrite # [Careful!] forcefully overwrite the output file, ignoring existing contents (default: false)
- -u, --update # update conflicting fields in existing kubeconfig (default: true)
- node
- create NODENAME # Create new nodes (and add them to existing clusters)
- -c, --cluster # specify the cluster that the node shall connect to (string, default: k3s-default)
- -i, --image # specify which k3s image should be used for the node(s) (string, default: 'docker.io/rancher/k3s:v1.20.0-k3s2', tag changes per build)
- --replicas # specify how many replicas you want to create with this spec (integer, default: 1)
- --role # specify the node role (string, format: 'agent|server', default: agent)
- --timeout # specify a timeout duration, after which the node creation will be interrupted, if not done yet (duration, e.g. '10s')
- --wait # wait for the node to be up and running before returning (default: true)
- start NODENAME # start a (stopped) node
- stop NODENAME # stop a node
- delete NODENAME # delete an existing node
- -a, --all # delete all existing nodes (default: false)
- -r, --registries # also delete registries, as a special type of node (default: false)
- list NODENAME
- --no-headers # do not print headers (default: false)
- registry
- create REGISTRYNAME
- -i, --image # specify image used for the registry (string, default: "docker.io/library/registry:2")
- -p, --port # select host port to map to (format: '[HOST:]HOSTPORT', default: 'random')
- delete REGISTRYNAME
- -a, --all # delete all existing registries (default: false)
- list [NAME [NAME...]]
- --no-headers # disable table headers (default: false)
- version # show k3d and k3s version
-
https://k3d.io/ -> Run k3s in Docker!
-https://k3d.io/
-k3d is a wrapper CLI that helps you to easily create k3s clusters inside docker.
-Nodes of a k3d cluster are docker containers running a k3s image.
-All nodes of a k3d cluster are part of the same docker network.
-k3d [flags]
-
-h, --help help for k3d
- --timestamps Enable Log timestamps
- --trace Enable super verbose output (trace logging)
- --verbose Enable verbose output (debug logging)
- --version Show k3d and default k3s version
-
Manage cluster(s)
-Manage cluster(s)
-k3d cluster [flags]
-
-h, --help help for cluster
-
--timestamps Enable Log timestamps
- --trace Enable super verbose output (trace logging)
- --verbose Enable verbose output (debug logging)
-
Create a new cluster
-Create a new k3s cluster with containerized nodes (k3s in docker).
-Every cluster will consist of one or more containers:
-- 1 (or more) server node container (k3s)
-- (optionally) 1 loadbalancer container as the entrypoint to the cluster (nginx)
-- (optionally) 1 (or more) agent node containers (k3s)
-k3d cluster create NAME [flags]
-
-a, --agents int Specify how many agents you want to create
- --agents-memory string Memory limit imposed on the agents nodes [From docker]
- --api-port [HOST:]HOSTPORT Specify the Kubernetes API server port exposed on the LoadBalancer (Format: [HOST:]HOSTPORT)
- - Example: `k3d cluster create --servers 3 --api-port 0.0.0.0:6550`
- -c, --config string Path of a config file to use
- -e, --env KEY[=VALUE][@NODEFILTER[;NODEFILTER...]] Add environment variables to nodes (Format: KEY[=VALUE][@NODEFILTER[;NODEFILTER...]]
- - Example: `k3d cluster create --agents 2 -e "HTTP_PROXY=my.proxy.com@server[0]" -e "SOME_KEY=SOME_VAL@server[0]"`
- --gpus string GPU devices to add to the cluster node containers ('all' to pass all GPUs) [From docker]
- -h, --help help for create
- -i, --image string Specify k3s image that you want to use for the nodes
- --k3s-agent-arg k3s agent Additional args passed to the k3s agent command on agent nodes (new flag per arg)
- --k3s-server-arg k3s server Additional args passed to the k3s server command on server nodes (new flag per arg)
- --kubeconfig-switch-context Directly switch the default kubeconfig's current-context to the new cluster's context (requires --kubeconfig-update-default) (default true)
- --kubeconfig-update-default Directly update the default kubeconfig with the new cluster's context (default true)
- -l, --label KEY[=VALUE][@NODEFILTER[;NODEFILTER...]] Add label to node container (Format: KEY[=VALUE][@NODEFILTER[;NODEFILTER...]]
- - Example: `k3d cluster create --agents 2 -l "my.label@agent[0,1]" -l "other.label=somevalue@server[0]"`
- --network string Join an existing network
- --no-hostip Disable the automatic injection of the Host IP as 'host.k3d.internal' into the containers and CoreDNS
- --no-image-volume Disable the creation of a volume for importing images
- --no-lb Disable the creation of a LoadBalancer in front of the server nodes
- --no-rollback Disable the automatic rollback actions, if anything goes wrong
- -p, --port [HOST:][HOSTPORT:]CONTAINERPORT[/PROTOCOL][@NODEFILTER] Map ports from the node containers to the host (Format: [HOST:][HOSTPORT:]CONTAINERPORT[/PROTOCOL][@NODEFILTER])
- - Example: `k3d cluster create --agents 2 -p 8080:80@agent[0] -p 8081@agent[1]`
- --registry-config string Specify path to an extra registries.yaml file
- --registry-create Create a k3d-managed registry and connect it to the cluster
- --registry-use stringArray Connect to one or more k3d-managed registries running locally
- -s, --servers int Specify how many servers you want to create
- --servers-memory string Memory limit imposed on the server nodes [From docker]
- --subnet 172.28.0.0/16 [Experimental: IPAM] Define a subnet for the newly created container network (Example: 172.28.0.0/16)
- --timeout duration Rollback changes if cluster couldn't be created in specified duration.
- --token string Specify a cluster token. By default, we generate one.
- -v, --volume [SOURCE:]DEST[@NODEFILTER[;NODEFILTER...]] Mount volumes into the nodes (Format: [SOURCE:]DEST[@NODEFILTER[;NODEFILTER...]]
- - Example: `k3d cluster create --agents 2 -v /my/path@agent[0,1] -v /tmp/test:/tmp/other@server[0]`
- --wait Wait for the server(s) to be ready before returning. Use '--timeout DURATION' to not wait forever. (default true)
-
--timestamps Enable Log timestamps
- --trace Enable super verbose output (trace logging)
- --verbose Enable verbose output (debug logging)
-
Delete cluster(s).
-Delete cluster(s).
-k3d cluster delete [NAME [NAME ...] | --all] [flags]
-
-a, --all Delete all existing clusters
- -h, --help help for delete
-
--timestamps Enable Log timestamps
- --trace Enable super verbose output (trace logging)
- --verbose Enable verbose output (debug logging)
-
List cluster(s)
-List cluster(s).
-k3d cluster list [NAME [NAME...]] [flags]
-
-h, --help help for list
- --no-headers Disable headers
- -o, --output string Output format. One of: json|yaml
- --token Print k3s cluster token
-
--timestamps Enable Log timestamps
- --trace Enable super verbose output (trace logging)
- --verbose Enable verbose output (debug logging)
-
Start existing k3d cluster(s)
-Start existing k3d cluster(s)
-k3d cluster start [NAME [NAME...] | --all] [flags]
-
-a, --all Start all existing clusters
- -h, --help help for start
- --timeout duration Maximum waiting time for '--wait' before canceling/returning.
- --wait Wait for the server(s) (and loadbalancer) to be ready before returning. (default true)
-
--timestamps Enable Log timestamps
- --trace Enable super verbose output (trace logging)
- --verbose Enable verbose output (debug logging)
-
Stop existing k3d cluster(s)
-Stop existing k3d cluster(s).
-k3d cluster stop [NAME [NAME...] | --all] [flags]
-
-a, --all Stop all existing clusters
- -h, --help help for stop
-
--timestamps Enable Log timestamps
- --trace Enable super verbose output (trace logging)
- --verbose Enable verbose output (debug logging)
-
Generate completion scripts for [bash, zsh, fish, powershell | psh]
-Generate completion scripts for [bash, zsh, fish, powershell | psh]
-k3d completion SHELL [flags]
-
-h, --help help for completion
-
--timestamps Enable Log timestamps
- --trace Enable super verbose output (trace logging)
- --verbose Enable verbose output (debug logging)
-
Work with config file(s)
-Work with config file(s)
-k3d config [flags]
-
-h, --help help for config
-
--timestamps Enable Log timestamps
- --trace Enable super verbose output (trace logging)
- --verbose Enable verbose output (debug logging)
-
k3d config init [flags]
-
-f, --force Force overwrite of target file
- -h, --help help for init
- -o, --output string Write a default k3d config (default "k3d-default.yaml")
-
--timestamps Enable Log timestamps
- --trace Enable super verbose output (trace logging)
- --verbose Enable verbose output (debug logging)
-
Generate command docs
-k3d docgen [flags]
-
-h, --help help for docgen
-
--timestamps Enable Log timestamps
- --trace Enable super verbose output (trace logging)
- --verbose Enable verbose output (debug logging)
-
Handle container images.
-Handle container images.
-k3d image [flags]
-
-h, --help help for image
-
--timestamps Enable Log timestamps
- --trace Enable super verbose output (trace logging)
- --verbose Enable verbose output (debug logging)
-
Import image(s) from docker into k3d cluster(s).
-Import image(s) from docker into k3d cluster(s).
-If an IMAGE starts with the prefix ‘docker.io/’, then this prefix is stripped internally.
-That is, ‘docker.io/rancher/k3d-tools:latest’ is treated as ‘rancher/k3d-tools:latest’.
-If an IMAGE starts with the prefix ‘library/’ (or ‘docker.io/library/’), then this prefix is stripped internally.
-That is, ‘library/busybox:latest’ (or ‘docker.io/library/busybox:latest’) is treated as ‘busybox:latest’.
-If an IMAGE does not have a version tag, then ‘:latest’ is assumed.
-That is, ‘rancher/k3d-tools’ is treated as ‘rancher/k3d-tools:latest’.
-A file ARCHIVE always takes precedence.
-So if a file ‘./rancher/k3d-tools’ exists, k3d will try to import it instead of the IMAGE of the same name.
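The normalization rules described above can be sketched in a few lines of Python (an illustrative re-implementation, not k3d's actual code):

```python
def normalize_image_name(image):
    """Apply the documented image-name normalization (illustrative sketch)."""
    # strip the 'docker.io/' prefix, then the 'library/' prefix
    # (handles 'docker.io/library/x' by stripping both in turn)
    for prefix in ("docker.io/", "library/"):
        if image.startswith(prefix):
            image = image[len(prefix):]
    # assume ':latest' if the last path segment carries no tag
    if ":" not in image.rsplit("/", 1)[-1]:
        image += ":latest"
    return image

assert normalize_image_name("docker.io/library/busybox") == "busybox:latest"
assert normalize_image_name("rancher/k3d-tools") == "rancher/k3d-tools:latest"
```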
-k3d image import [IMAGE | ARCHIVE [IMAGE | ARCHIVE...]] [flags]
-
-c, --cluster stringArray Select clusters to load the image to. (default [k3s-default])
- -h, --help help for import
- -k, --keep-tarball Do not delete the tarball containing the saved images from the shared volume
- -t, --keep-tools Do not delete the tools node after import
-
--timestamps Enable Log timestamps
- --trace Enable super verbose output (trace logging)
- --verbose Enable verbose output (debug logging)
-
Manage kubeconfig(s)
-Manage kubeconfig(s)
-k3d kubeconfig [flags]
-
-h, --help help for kubeconfig
-
--timestamps Enable Log timestamps
- --trace Enable super verbose output (trace logging)
- --verbose Enable verbose output (debug logging)
-
Print kubeconfig(s) from cluster(s).
-Print kubeconfig(s) from cluster(s).
-k3d kubeconfig get [CLUSTER [CLUSTER [...]] | --all] [flags]
-
-a, --all Output kubeconfigs from all existing clusters
- -h, --help help for get
-
--timestamps Enable Log timestamps
- --trace Enable super verbose output (trace logging)
- --verbose Enable verbose output (debug logging)
-
Write/Merge kubeconfig(s) from cluster(s) into new or existing kubeconfig/file.
-Write/Merge kubeconfig(s) from cluster(s) into new or existing kubeconfig/file.
-k3d kubeconfig merge [CLUSTER [CLUSTER [...]] | --all] [flags]
-
-a, --all Get kubeconfigs from all existing clusters
- -h, --help help for merge
- -d, --kubeconfig-merge-default Merge into the default kubeconfig ($KUBECONFIG or /home/thklein/.kube/config)
- -s, --kubeconfig-switch-context Switch to new context (default true)
- -o, --output string Define output [ - | FILE ] (default from $KUBECONFIG or /home/thklein/.kube/config
- --overwrite [Careful!] Overwrite existing file, ignoring its contents
- -u, --update Update conflicting fields in existing kubeconfig (default true)
-
--timestamps Enable Log timestamps
- --trace Enable super verbose output (trace logging)
- --verbose Enable verbose output (debug logging)
-
Manage node(s)
-Manage node(s)
-k3d node [flags]
-
-h, --help help for node
-
--timestamps Enable Log timestamps
- --trace Enable super verbose output (trace logging)
- --verbose Enable verbose output (debug logging)
-
Create a new k3s node in docker
-Create a new containerized k3s node (k3s in docker).
-k3d node create NAME [flags]
-
-c, --cluster string Select the cluster that the node shall connect to. (default "k3s-default")
- -h, --help help for create
- -i, --image string Specify k3s image used for the node(s) (default "docker.io/rancher/k3s:v1.20.0-k3s2")
- --memory string Memory limit imposed on the node [From docker]
- --replicas int Number of replicas of this node specification. (default 1)
- --role string Specify node role [server, agent] (default "agent")
- --timeout duration Maximum waiting time for '--wait' before canceling/returning.
- --wait Wait for the node(s) to be ready before returning.
-
--timestamps Enable Log timestamps
- --trace Enable super verbose output (trace logging)
- --verbose Enable verbose output (debug logging)
-
Delete node(s).
-Delete node(s).
-k3d node delete (NAME | --all) [flags]
-
-a, --all Delete all existing nodes
- -h, --help help for delete
- -r, --registries Also delete registries
-
--timestamps Enable Log timestamps
- --trace Enable super verbose output (trace logging)
- --verbose Enable verbose output (debug logging)
-
List node(s)
-List node(s).
-k3d node list [NAME [NAME...]] [flags]
-
-h, --help help for list
- --no-headers Disable headers
- -o, --output string Output format. One of: json|yaml
-
--timestamps Enable Log timestamps
- --trace Enable super verbose output (trace logging)
- --verbose Enable verbose output (debug logging)
-
Start an existing k3d node
-Start an existing k3d node.
-k3d node start NAME [flags]
-
-h, --help help for start
-
--timestamps Enable Log timestamps
- --trace Enable super verbose output (trace logging)
- --verbose Enable verbose output (debug logging)
-
Stop an existing k3d node
-Stop an existing k3d node.
-k3d node stop NAME [flags]
-
-h, --help help for stop
-
--timestamps Enable Log timestamps
- --trace Enable super verbose output (trace logging)
- --verbose Enable verbose output (debug logging)
-
Manage registry/registries
-Manage registry/registries
-k3d registry [flags]
-
-h, --help help for registry
-
--timestamps Enable Log timestamps
- --trace Enable super verbose output (trace logging)
- --verbose Enable verbose output (debug logging)
-
Create a new registry
-Create a new registry.
-k3d registry create NAME [flags]
-
-h, --help help for create
- -i, --image string Specify image used for the registry (default "docker.io/library/registry:2")
- --no-help Disable the help text (How-To use the registry)
- -p, --port [HOST:]HOSTPORT Select which port the registry should be listening on on your machine (localhost) (Format: [HOST:]HOSTPORT)
- - Example: `k3d registry create --port 0.0.0.0:5111` (default "random")
-
--timestamps Enable Log timestamps
- --trace Enable super verbose output (trace logging)
- --verbose Enable verbose output (debug logging)
-
Delete registry/registries.
-Delete registry/registries.
-k3d registry delete (NAME | --all) [flags]
-
-a, --all Delete all existing registries
- -h, --help help for delete
-
--timestamps Enable Log timestamps
- --trace Enable super verbose output (trace logging)
- --verbose Enable verbose output (debug logging)
-
List registries
-List registries.
-k3d registry list [NAME [NAME...]] [flags]
-
-h, --help help for list
- --no-headers Disable headers
- -o, --output string Output format. One of: json|yaml
-
--timestamps Enable Log timestamps
- --trace Enable super verbose output (trace logging)
- --verbose Enable verbose output (debug logging)
-
Show k3d and default k3s version
-Show k3d and default k3s version
-k3d version [flags]
-
-h, --help help for version
-
--timestamps Enable Log timestamps
- --trace Enable super verbose output (trace logging)
- --verbose Enable verbose output (debug logging)
-
As of k3d v4.0.0, released in January 2021, k3d ships with configuration file support for the k3d cluster create
command.
-This allows you to define all the things that you defined with CLI flags before in a nice and tidy YAML (as a Kubernetes user, we know you love it ;) ).
Syntax & Semantics
-The options defined in the config file are not 100% the same as the CLI flags.
-This concerns naming and style/usage/structure, e.g.:
---api-port is split up into a field named kubeAPI that has 3 different “child fields” (host, hostIP and hostPort)
-k3d-specific options are bundled under options.k3d, where --no-rollback is defined as options.k3d.disableRollback
-repeatable flags (like --port) are reflected as YAML lists
Using a config file is as easy as putting it in a well-known place in your file system and then referencing it via flag:
-k3d cluster create --config /home/me/my-awesome-config.yaml (must be .yaml/.yml)
-k3d cluster create somename --config /home/me/my-awesome-config.yaml
-k3d cluster create --config /home/me/my-awesome-config.yaml --volume '/some/path:/some:path@server[0]'
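The flag-to-YAML mapping for repeatable flags with node filters can be pictured like this (hypothetical helper shown for illustration, mirroring the volumes entries of the config file; not k3d's actual parser):

```python
def volume_flag_to_entry(flag_value):
    """Turn `--volume 'SOURCE:DEST@server[0];agent[*]'` into its config-file shape."""
    value, _, filters = flag_value.partition("@")
    entry = {"volume": value}
    if filters:
        # multiple node filters are separated by ';' on the CLI
        entry["nodeFilters"] = filters.split(";")
    return entry

print(volume_flag_to_entry("/my/host/path:/path/in/node@server[0];agent[*]"))
# {'volume': '/my/host/path:/path/in/node', 'nodeFilters': ['server[0]', 'agent[*]']}
```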
As of the time of writing this documentation, the config file only requires you to define two fields:
-apiVersion to match the version of the config file that you want to use (at this time it would be apiVersion: k3d.io/v1alpha2)
-kind to define the kind of config file that you want to use (currently we only have the Simple config)
So this would be the minimal config file, which configures absolutely nothing:
-apiVersion: k3d.io/v1alpha2
-kind: Simple
-
The configuration options for k3d are continuously evolving and so is the config file (syntax) itself.
-Currently, the config file is still in an Alpha-State, meaning, that it is subject to change anytime (though we try to keep breaking changes low).
Validation via JSON-Schema
-k3d uses a JSON-Schema to describe the expected format and fields of the configuration file.
-This schema is also used to validate a user-given config file.
-This JSON-Schema can be found in the specific config version sub-directory in the repository (e.g. for v1alpha2) and can be used to look up supported fields or by linters to validate the config file, e.g. in your code editor.
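Independent of the full JSON-Schema, the two required fields can be checked with a few lines of Python (a hand-rolled sketch for illustration, not k3d's actual validation):

```python
def validate_required_fields(cfg):
    """Check the two required fields of a k3d 'Simple' config dict (illustrative)."""
    errors = []
    if cfg.get("apiVersion") != "k3d.io/v1alpha2":
        errors.append("apiVersion must be 'k3d.io/v1alpha2'")
    if cfg.get("kind") != "Simple":
        errors.append("kind must be 'Simple'")
    return errors

# The minimal config from above passes validation:
print(validate_required_fields({"apiVersion": "k3d.io/v1alpha2", "kind": "Simple"}))  # []
```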
Since the config options and the config file are changing quite a bit, it’s hard to keep track of all the supported config file settings, so here’s an example showing all of them as of the time of writing:
-# k3d configuration file, saved as e.g. /home/me/myk3dcluster.yaml
-apiVersion: k3d.io/v1alpha2 # this will change in the future as we make everything more stable
-kind: Simple # internally, we also have a Cluster config, which is not yet available externally
-name: mycluster # name that you want to give to your cluster (will still be prefixed with `k3d-`)
-servers: 1 # same as `--servers 1`
-agents: 2 # same as `--agents 2`
-kubeAPI: # same as `--api-port myhost.my.domain:6445` (where the name would resolve to 127.0.0.1)
- host: "myhost.my.domain" # important for the `server` setting in the kubeconfig
- hostIP: "127.0.0.1" # where the Kubernetes API will be listening on
- hostPort: "6445" # where the Kubernetes API listening port will be mapped to on your host system
-image: rancher/k3s:v1.20.4-k3s1 # same as `--image rancher/k3s:v1.20.4-k3s1`
-network: my-custom-net # same as `--network my-custom-net`
-token: superSecretToken # same as `--token superSecretToken`
-volumes: # repeatable flags are represented as YAML lists
- - volume: /my/host/path:/path/in/node # same as `--volume '/my/host/path:/path/in/node@server[0];agent[*]'`
- nodeFilters:
- - server[0]
- - agent[*]
-ports:
- - port: 8080:80 # same as `--port '8080:80@loadbalancer'`
- nodeFilters:
- - loadbalancer
-labels:
- - label: foo=bar # same as `--label 'foo=bar@agent[1]'`
- nodeFilters:
- - agent[1]
-env:
- - envVar: bar=baz # same as `--env 'bar=baz@server[0]'`
- nodeFilters:
- - server[0]
-registries: # define how registries should be created or used
- create: true # creates a default registry to be used with the cluster; same as `--registry-create`
- use:
- - k3d-myotherregistry:5000 # some other k3d-managed registry; same as `--registry-use 'k3d-myotherregistry:5000'`
- config: | # define contents of the `registries.yaml` file (or reference a file); same as `--registry-config /path/to/config.yaml`
- mirrors:
- "my.company.registry":
- endpoint:
- - http://my.company.registry:5000
-options:
- k3d: # k3d runtime settings
- wait: true # wait for cluster to be usable before returning; same as `--wait` (default: true)
- timeout: "60s" # wait timeout before aborting; same as `--timeout 60s`
- disableLoadbalancer: false # same as `--no-lb`
- disableImageVolume: false # same as `--no-image-volume`
- disableRollback: false # same as `--no-rollback`
- disableHostIPInjection: false # same as `--no-hostip`
- k3s: # options passed on to K3s itself
- extraServerArgs: # additional arguments passed to the `k3s server` command; same as `--k3s-server-arg`
- - --tls-san=my.host.domain
- extraAgentArgs: [] # additional arguments passed to the `k3s agent` command; same as `--k3s-agent-arg`
- kubeconfig:
- updateDefaultKubeconfig: true # add new cluster to your default Kubeconfig; same as `--kubeconfig-update-default` (default: true)
- switchCurrentContext: true # also set current-context to the new cluster's context; same as `--kubeconfig-switch-context` (default: true)
- runtime: # runtime (docker) specific options
- gpuRequest: all # same as `--gpus all`
-
k3d uses Cobra
and Viper
for CLI and general config handling respectively.
-This automatically introduces a “config option order of priority” (precedence order):
Config Precedence Order
-Source: spf13/viper#why-viper
---Internal Setting > CLI Flag > Environment Variable > Config File > (k/v store >) Defaults
-
This means that you can define e.g. a "base configuration file" with settings that you share across different clusters and override only the fields that differ between those clusters in your CLI flags/arguments.
-For example, you use the same config file to create three clusters which only have different names and kubeAPI
(--api-port
) settings.
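As an illustration of that precedence order (the file path and values below are made up for this sketch), a shared base config can be combined with per-cluster CLI overrides:

```shell
# Base config shared across clusters (illustrative path and values).
cat > /tmp/k3d-base.yaml <<'EOF'
apiVersion: k3d.io/v1alpha2
kind: Simple
servers: 1
agents: 2
EOF

# CLI flags take precedence over the config file, so only the settings
# that differ need to be passed per cluster, e.g.:
#   k3d cluster create cluster-a --config /tmp/k3d-base.yaml --api-port 6551
#   k3d cluster create cluster-b --config /tmp/k3d-base.yaml --api-port 6552
```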
The “Configuration as Code” Way

If you want to use NetworkPolicy, you can use Calico in k3s instead of Flannel.
You can follow the Calico documentation. Then you have to change the ConfigMap `calico-config`: in the `cni_network_config`, add the entry for allowing IP forwarding:
"container_settings": {
- "allow_ip_forwarding": true
-}
-
Or you can directly use this calico.yaml manifest.

On the k3s cluster creation:

- add the flag `--flannel-backend=none`; for this, on k3d you need to forward this flag to k3s with the option `--k3s-server-arg`
- mount (`--volume`) the calico descriptor into the auto-deploy manifest directory of k3s, `/var/lib/rancher/k3s/server/manifests/`

So the command for the cluster creation is (when you are at the root of the k3d repository):
-k3d cluster create "${clustername}" \
- --k3s-server-arg '--flannel-backend=none' \
- --volume "$(pwd)/docs/usage/guides/calico.yaml:/var/lib/rancher/k3s/server/manifests/calico.yaml"
-
In this example:

- Replace `"${clustername}"` with the name of the cluster (or set a variable).
- `$(pwd)/docs/usage/guides/calico.yaml` is the absolute path of the calico manifest; you can adapt it.
The cluster will start without flannel and with Calico as the CNI plugin.
To watch the pod(s) deployment:
-watch "kubectl get pods -n kube-system"
-
At the beginning you will see something like this (with the command line `kubectl get pods -n kube-system`):
NAME READY STATUS RESTARTS AGE
-helm-install-traefik-pn84f 0/1 Pending 0 3s
-calico-node-97rx8 0/1 Init:0/3 0 3s
-metrics-server-7566d596c8-hwnqq 0/1 Pending 0 2s
-calico-kube-controllers-58b656d69f-2z7cn 0/1 Pending 0 2s
-local-path-provisioner-6d59f47c7-rmswg 0/1 Pending 0 2s
-coredns-8655855d6-cxtnr 0/1 Pending 0 2s
-
And when it has finished starting:
-NAME READY STATUS RESTARTS AGE
-metrics-server-7566d596c8-hwnqq 1/1 Running 0 56s
-calico-node-97rx8 1/1 Running 0 57s
-helm-install-traefik-pn84f 0/1 Completed 1 57s
-svclb-traefik-lmjr5 2/2 Running 0 28s
-calico-kube-controllers-58b656d69f-2z7cn 1/1 Running 0 56s
-local-path-provisioner-6d59f47c7-rmswg 1/1 Running 0 56s
-traefik-758cd5fc85-x8p57 1/1 Running 0 28s
-coredns-8655855d6-cxtnr 1/1 Running 0 56s
-
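With Calico running, NetworkPolicy objects are actually enforced. As an illustration (the policy name is our own, not from the original guide), a minimal default-deny ingress policy for the `default` namespace looks like this:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress   # illustrative name
  namespace: default
spec:
  podSelector: {}              # empty selector: applies to all pods in the namespace
  policyTypes:
    - Ingress                  # no ingress rules listed, so all ingress is denied
```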
-If you want to run CUDA workloads on the K3s container you need to customize the container.
-CUDA workloads require the NVIDIA Container Runtime, so containerd needs to be configured to use this runtime.
-The K3s container itself also needs to run with this runtime.
-If you are using Docker you can install the NVIDIA Container Toolkit.
To get the NVIDIA container runtime in the K3s image you need to build your own K3s image.
-The native K3s image is based on Alpine but the NVIDIA container runtime is not supported on Alpine yet.
-To get around this we need to build the image with a supported base image.
ARG K3S_TAG="v1.21.2-k3s1"
-FROM rancher/k3s:$K3S_TAG as k3s
-
-FROM nvidia/cuda:11.2.0-base-ubuntu18.04
-
-ARG NVIDIA_CONTAINER_RUNTIME_VERSION
-ENV NVIDIA_CONTAINER_RUNTIME_VERSION=$NVIDIA_CONTAINER_RUNTIME_VERSION
-
-RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections
-
-RUN apt-get update && \
- apt-get -y install gnupg2 curl
-
-# Install NVIDIA Container Runtime
-RUN curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | apt-key add -
-
-RUN curl -s -L https://nvidia.github.io/nvidia-container-runtime/ubuntu18.04/nvidia-container-runtime.list | tee /etc/apt/sources.list.d/nvidia-container-runtime.list
-
-RUN apt-get update && \
- apt-get -y install nvidia-container-runtime=${NVIDIA_CONTAINER_RUNTIME_VERSION}
-
-COPY --from=k3s / /
-
-RUN mkdir -p /etc && \
- echo 'hosts: files dns' > /etc/nsswitch.conf
-
-RUN chmod 1777 /tmp
-
-# Provide custom containerd configuration to configure the nvidia-container-runtime
-RUN mkdir -p /var/lib/rancher/k3s/agent/etc/containerd/
-
-COPY config.toml.tmpl /var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl
-
-# Deploy the nvidia driver plugin on startup
-RUN mkdir -p /var/lib/rancher/k3s/server/manifests
-
-COPY device-plugin-daemonset.yaml /var/lib/rancher/k3s/server/manifests/nvidia-device-plugin-daemonset.yaml
-
-VOLUME /var/lib/kubelet
-VOLUME /var/lib/rancher/k3s
-VOLUME /var/lib/cni
-VOLUME /var/log
-
-ENV PATH="$PATH:/bin/aux"
-
-ENTRYPOINT ["/bin/k3s"]
-CMD ["agent"]
-
This Dockerfile is based on the K3s Dockerfile. The following changes are applied:

- the base image is changed to a CUDA image; the CUDA version (`cuda:xx.x.x`) must match the one you're planning to use
- a custom `config.toml` template is added to configure the NVIDIA Container Runtime; this replaces the default `runc` runtime

We need to configure containerd to use the NVIDIA Container Runtime. We need to customize the `config.toml` that is used at startup. K3s provides a way to do this using a `config.toml.tmpl` file. More information can be found on the K3s site.
-[plugins.opt]
- path = "{{ .NodeConfig.Containerd.Opt }}"
-
-[plugins.cri]
- stream_server_address = "127.0.0.1"
- stream_server_port = "10010"
-
-{{- if .IsRunningInUserNS }}
- disable_cgroup = true
- disable_apparmor = true
- restrict_oom_score_adj = true
-{{end}}
-
-{{- if .NodeConfig.AgentConfig.PauseImage }}
- sandbox_image = "{{ .NodeConfig.AgentConfig.PauseImage }}"
-{{end}}
-
-{{- if not .NodeConfig.NoFlannel }}
-[plugins.cri.cni]
- bin_dir = "{{ .NodeConfig.AgentConfig.CNIBinDir }}"
- conf_dir = "{{ .NodeConfig.AgentConfig.CNIConfDir }}"
-{{end}}
-
-[plugins.cri.containerd.runtimes.runc]
- # ---- changed from 'io.containerd.runc.v2' for GPU support
- runtime_type = "io.containerd.runtime.v1.linux"
-
-# ---- added for GPU support
-[plugins.linux]
- runtime = "nvidia-container-runtime"
-
-{{ if .PrivateRegistryConfig }}
-{{ if .PrivateRegistryConfig.Mirrors }}
-[plugins.cri.registry.mirrors]{{end}}
-{{range $k, $v := .PrivateRegistryConfig.Mirrors }}
-[plugins.cri.registry.mirrors."{{$k}}"]
- endpoint = [{{range $i, $j := $v.Endpoints}}{{if $i}}, {{end}}{{printf "%q" .}}{{end}}]
-{{end}}
-
-{{range $k, $v := .PrivateRegistryConfig.Configs }}
-{{ if $v.Auth }}
-[plugins.cri.registry.configs."{{$k}}".auth]
- {{ if $v.Auth.Username }}username = "{{ $v.Auth.Username }}"{{end}}
- {{ if $v.Auth.Password }}password = "{{ $v.Auth.Password }}"{{end}}
- {{ if $v.Auth.Auth }}auth = "{{ $v.Auth.Auth }}"{{end}}
- {{ if $v.Auth.IdentityToken }}identitytoken = "{{ $v.Auth.IdentityToken }}"{{end}}
-{{end}}
-{{ if $v.TLS }}
-[plugins.cri.registry.configs."{{$k}}".tls]
- {{ if $v.TLS.CAFile }}ca_file = "{{ $v.TLS.CAFile }}"{{end}}
- {{ if $v.TLS.CertFile }}cert_file = "{{ $v.TLS.CertFile }}"{{end}}
- {{ if $v.TLS.KeyFile }}key_file = "{{ $v.TLS.KeyFile }}"{{end}}
-{{end}}
-{{end}}
-{{end}}
-
To enable NVIDIA GPU support on Kubernetes you also need to install the NVIDIA device plugin. The device plugin is a daemonset that allows you to automatically expose the GPUs on each node of your cluster and run GPU-enabled containers:
-apiVersion: apps/v1
-kind: DaemonSet
-metadata:
- name: nvidia-device-plugin-daemonset
- namespace: kube-system
-spec:
- selector:
- matchLabels:
- name: nvidia-device-plugin-ds
- template:
- metadata:
- # Mark this pod as a critical add-on; when enabled, the critical add-on scheduler
- # reserves resources for critical add-on pods so that they can be rescheduled after
- # a failure. This annotation works in tandem with the toleration below.
- annotations:
- scheduler.alpha.kubernetes.io/critical-pod: ""
- labels:
- name: nvidia-device-plugin-ds
- spec:
- tolerations:
- # Allow this pod to be rescheduled while the node is in "critical add-ons only" mode.
- # This, along with the annotation above marks this pod as a critical add-on.
- - key: CriticalAddonsOnly
- operator: Exists
- containers:
- - env:
- - name: DP_DISABLE_HEALTHCHECKS
- value: xids
- image: nvidia/k8s-device-plugin:1.11
- name: nvidia-device-plugin-ctr
- securityContext:
- allowPrivilegeEscalation: true
- capabilities:
- drop: ["ALL"]
- volumeMounts:
- - name: device-plugin
- mountPath: /var/lib/kubelet/device-plugins
- volumes:
- - name: device-plugin
- hostPath:
- path: /var/lib/kubelet/device-plugins
-
To build the custom image we need to build K3s because we need the generated output.

Put the following files in a directory:

The `build.sh` script is configured using environment variables and defaults to `v1.21.2+k3s1`. Please set at least the `IMAGE_REGISTRY` variable! The script builds the custom K3s image including the NVIDIA drivers.
#!/bin/bash
-
-set -euxo pipefail
-
-K3S_TAG=${K3S_TAG:="v1.21.2-k3s1"} # replace + with -, if needed
-IMAGE_REGISTRY=${IMAGE_REGISTRY:="MY_REGISTRY"}
-IMAGE_REPOSITORY=${IMAGE_REPOSITORY:="rancher/k3s"}
-IMAGE_TAG="$K3S_TAG-cuda"
-IMAGE=${IMAGE:="$IMAGE_REGISTRY/$IMAGE_REPOSITORY:$IMAGE_TAG"}
-
-NVIDIA_CONTAINER_RUNTIME_VERSION=${NVIDIA_CONTAINER_RUNTIME_VERSION:="3.5.0-1"}
-
-echo "IMAGE=$IMAGE"
-
-# due to some unknown reason, copying symlinks fails with buildkit enabled
-DOCKER_BUILDKIT=0 docker build \
- --build-arg K3S_TAG=$K3S_TAG \
- --build-arg NVIDIA_CONTAINER_RUNTIME_VERSION=$NVIDIA_CONTAINER_RUNTIME_VERSION \
- -t $IMAGE .
-docker push $IMAGE
-echo "Done!"
-
You can use the image with k3d:
-k3d cluster create gputest --image=$IMAGE --gpus=1
-
Deploy a test pod:
-kubectl apply -f cuda-vector-add.yaml
-kubectl logs cuda-vector-add
-
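The `cuda-vector-add.yaml` manifest itself is not shown above; a sketch based on the well-known CUDA sample pod from the Kubernetes documentation (the image reference is an assumption taken from that sample):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cuda-vector-add
spec:
  restartPolicy: OnFailure
  containers:
    - name: cuda-vector-add
      # sample image from the Kubernetes GPU scheduling docs (assumption)
      image: "k8s.gcr.io/cuda-vector-add:v0.1"
      resources:
        limits:
          nvidia.com/gpu: 1   # request one GPU via the device plugin
```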
This should output something like the following:
-$ kubectl logs cuda-vector-add
-
-[Vector addition of 50000 elements]
-Copy input data from the host memory to the CUDA device
-CUDA kernel launch with 196 blocks of 256 threads
-Copy output data from the CUDA device to the host memory
-Test PASSED
-Done
-
If the cuda-vector-add
pod is stuck in Pending
state, probably the device-driver daemonset didn’t get deployed correctly from the auto-deploy manifests. In that case, you can apply it manually via kubectl apply -f device-plugin-daemonset.yaml
.
Most of the information in this article was obtained from various sources:
- -In this example, we will deploy a simple nginx webserver deployment and make it accessible via ingress.
-Therefore, we have to create the cluster in a way, that the internal port 80 (where the traefik
ingress controller is listening on) is exposed on the host system.
Create a cluster, mapping the ingress port 80 to localhost:8081
-k3d cluster create --api-port 6550 -p "8081:80@loadbalancer" --agents 2
Good to know
`--api-port 6550` is not required for the example to work. It is used to have `k3s`'s API-Server listening on port 6550, with that port mapped to the host system.
means:8081
from the host to port 80
on the container which matches the nodefilter loadbalancer
“loadbalancer
nodefilter matches only the serverlb
that’s deployed in front of a cluster’s server nodesserverlb
will be proxied to the same ports on all server nodes in the clusterGet the kubeconfig file (redundant, as k3d cluster create
already merges it into your default kubeconfig file)
export KUBECONFIG="$(k3d kubeconfig write k3s-default)"
Create a nginx deployment
-kubectl create deployment nginx --image=nginx
Create a ClusterIP service for it
-kubectl create service clusterip nginx --tcp=80:80
Create an ingress object for it by copying the following manifest to a file and applying with kubectl apply -f thatfile.yaml
Note: k3s
deploys traefik
as the default ingress controller
# apiVersion: networking.k8s.io/v1beta1 # for k3s < v1.19
-apiVersion: networking.k8s.io/v1
-kind: Ingress
-metadata:
- name: nginx
- annotations:
- ingress.kubernetes.io/ssl-redirect: "false"
-spec:
- rules:
- - http:
- paths:
- - path: /
- pathType: Prefix
- backend:
- service:
- name: nginx
- port:
- number: 80
-
Curl it via localhost
-curl localhost:8081/
Create a cluster, mapping the port `30080` from `agent-0` to `localhost:8082`:
k3d cluster create mycluster -p "8082:30080@agent[0]" --agents 2
Note 1: Kubernetes’ default NodePort range is `30000-32767`
Note 2: You may as well expose the whole NodePort range from the very beginning, e.g. via k3d cluster create mycluster --agents 3 -p "30000-32767:30000-32767@server[0]"
(See this video from @portainer)
… (Steps 2 and 3 like above) …
-Create a NodePort service for it by copying the following manifest to a file and applying it with kubectl apply -f
apiVersion: v1
-kind: Service
-metadata:
- labels:
- app: nginx
- name: nginx
-spec:
- ports:
- - name: 80-80
- nodePort: 30080
- port: 80
- protocol: TCP
- targetPort: 80
- selector:
- app: nginx
- type: NodePort
-
Curl it via localhost
-curl localhost:8082/
You can add registries by specifying them in a registries.yaml
and referencing it at creation time:
`k3d cluster create mycluster --registry-config "/home/YOU/my-registries.yaml"`.
Before we added the --registry-config
flag in k3d v4.0.0, you had to bind-mount the file to the correct location: --volume "/home/YOU/my-registries.yaml:/etc/rancher/k3s/registries.yaml"
This file is a regular k3s registries configuration file, and looks like this:
-mirrors:
- "my.company.registry:5000":
- endpoint:
- - http://my.company.registry:5000
-
In this example, an image with a name like `my.company.registry:5000/nginx:latest` would be pulled from the registry running at `http://my.company.registry:5000`.

Note well, there is an important limitation: this configuration file will only work with k3s >= v0.10.0. It will fail silently with previous versions of k3s, but you will find an alternative solution in a section below.
-This file can also be used for providing additional information necessary for accessing some registries, like authentication and certificates.
-If you’re using a SimpleConfig
file to configure your k3d cluster, you may as well embed the registries.yaml in there directly:
apiVersion: k3d.io/v1alpha2
-kind: Simple
-name: test
-servers: 1
-agents: 2
-registries:
- create: true
- config: |
- mirrors:
- "my.company.registry":
- endpoint:
- - http://my.company.registry:5000
-
Here, the config for the k3d-managed registry, created by the `create: true` flag, will be merged with the config specified under `config: |`.
When using authenticated registries, we can add the username and password in a
-configs
section in the registries.yaml
, like this:
mirrors:
- my.company.registry:
- endpoint:
- - http://my.company.registry
-
-configs:
- my.company.registry:
- auth:
- username: aladin
- password: abracadabra
-
When using secure registries, the registries.yaml
file must include information about the certificates. For example, if you want to use images from the secure registry running at https://my.company.registry
, you must first download a CA file valid for that server and store it in some well-known directory like ${HOME}/.k3d/my-company-root.pem
.
Then you have to mount the CA file in some directory in the nodes in the cluster and include that mounted file in a configs
section in the registries.yaml
file.
-For example, if we mount the CA file in /etc/ssl/certs/my-company-root.pem
, the registries.yaml
will look like:
mirrors:
- my.company.registry:
- endpoint:
- - https://my.company.registry
-
-configs:
- my.company.registry:
- tls:
- # we will mount "my-company-root.pem" in the /etc/ssl/certs/ directory.
- ca_file: "/etc/ssl/certs/my-company-root.pem"
-
Finally, we can create the cluster, mounting the CA file in the path we specified in ca_file
:
k3d cluster create \
- --volume "${HOME}/.k3d/my-registries.yaml:/etc/rancher/k3s/registries.yaml" \
- --volume "${HOME}/.k3d/my-company-root.pem:/etc/ssl/certs/my-company-root.pem"
-
Just ported!
-The k3d-managed registry is available again as of k3d v4.0.0 (January 2021)
-k3d cluster create mycluster --registry-create
: This creates your cluster mycluster
together with a registry container called k3d-mycluster-registry
registries.yaml
file)Check the k3d command output or docker ps -f name=k3d-mycluster-registry
to find the exposed port (let’s use 12345
here)
docker pull alpine:latest
, re-tag it to reference your newly created registry docker tag alpine:latest k3d-mycluster-registry:12345/testimage:local
and push it docker push k3d-mycluster-registry:12345/testimage:local
kubectl run --image k3d-mycluster-registry:12345/testimage:local testimage --command -- tail -f /dev/null
(creates a container that will not do anything but keep on running)k3d registry create myregistry.localhost --port 12345
creates a new registry called k3d-myregistry.localhost
(could be used with automatic resolution of *.localhost
, see next section - also, note the k3d-
prefix that k3d adds to all resources it creates)k3d cluster create newcluster --registry-use k3d-myregistry.localhost:12345
(make sure you use the k3d-
prefix here) creates a new cluster set up to use that registry
-You can start your own local registry it with some docker
commands, like:
docker volume create local_registry
-docker container run -d --name registry.localhost -v local_registry:/var/lib/registry --restart always -p 5000:5000 registry:2
-
These commands will start your registry at `registry.localhost:5000`. In order to push to this registry, you will need to make it accessible as described in the next section.
-Once your registry is up and running, we will need to add it to your registries.yaml
configuration file.
-Finally, you have to connect the registry network to the k3d cluster network: docker network connect k3d-k3s-default registry.localhost
. And then you can test your local registry.
As per the guide above, the registry will be available at registry.localhost:5000
.
All the nodes in your k3d cluster can resolve this hostname (thanks to the DNS server provided by the Docker daemon) but, in order to be able to push to this registry, this hostname also has to be resolved by your host.
-nss-myhostname to resolve *.localhost
Luckily (for Linux users), NSS-myhostname ships with many Linux distributions
-and should resolve *.localhost
automatically to 127.0.0.1
.
-Otherwise, it’s installable using sudo apt install libnss-myhostname
.
If your system does not provide/support tools that can auto-resolve specific names to 127.0.0.1
, you can manually add an entry in your /etc/hosts
(c:\windows\system32\drivers\etc\hosts
on Windows) file like this:
127.0.0.1 k3d-registry.localhost
-
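A sketch of adding that entry idempotently from a shell; it targets a stand-in file here so it is safe to run as-is (on a real system the target would be `/etc/hosts` and would require root):

```shell
# Stand-in for /etc/hosts in this sketch (on a real system: /etc/hosts, via sudo).
HOSTS_FILE=/tmp/hosts-demo

# Only append the entry if it is not already present.
grep -q "k3d-registry.localhost" "$HOSTS_FILE" 2>/dev/null \
  || echo "127.0.0.1 k3d-registry.localhost" >> "$HOSTS_FILE"
```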
Once again, this will only work with k3s >= v0.10.0 (see the sections below when using k3s <= v0.9.1)
You should test that you can push images to the registry and that those images can be used in `Deployments` in your k3d cluster.

We will verify these two things for a local registry (located at `k3d-registry.localhost:12345`) running on your development machine.
-Things would be basically the same for checking an external registry, but some additional configuration could be necessary in your local machine when using an authenticated or secure registry (please refer to Docker’s documentation for this).
First, we can download some image (like nginx
) and push it to our local registry with:
docker pull nginx:latest
docker tag nginx:latest k3d-registry.localhost:12345/nginx:latest
docker push k3d-registry.localhost:12345/nginx:latest
-
Then we can deploy a pod referencing this image to your cluster:
-cat <<EOF | kubectl apply -f -
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: nginx-test-registry
- labels:
- app: nginx-test-registry
-spec:
- replicas: 1
- selector:
- matchLabels:
- app: nginx-test-registry
- template:
- metadata:
- labels:
- app: nginx-test-registry
- spec:
- containers:
- - name: nginx-test-registry
- image: k3d-registry.localhost:12345/nginx:latest
- ports:
- - containerPort: 80
-EOF
-
Then you should check that the pod is running with kubectl get pods -l "app=nginx-test-registry"
.
k3s servers below v0.9.1 do not recognize the registries.yaml
file as described in the beginning, so you will need to embed the contents of that file in a containerd
configuration file.
-You will have to create your own containerd
configuration file at some well-known path like ${HOME}/.k3d/config.toml.tmpl
, like this:
# Original section: no changes
-[plugins.opt]
-path = "{{ .NodeConfig.Containerd.Opt }}"
-[plugins.cri]
-stream_server_address = "{{ .NodeConfig.AgentConfig.NodeName }}"
-stream_server_port = "10010"
-{{- if .IsRunningInUserNS }}
-disable_cgroup = true
-disable_apparmor = true
-restrict_oom_score_adj = true
-{{ end -}}
-{{- if .NodeConfig.AgentConfig.PauseImage }}
-sandbox_image = "{{ .NodeConfig.AgentConfig.PauseImage }}"
-{{ end -}}
-{{- if not .NodeConfig.NoFlannel }}
- [plugins.cri.cni]
- bin_dir = "{{ .NodeConfig.AgentConfig.CNIBinDir }}"
- conf_dir = "{{ .NodeConfig.AgentConfig.CNIConfDir }}"
-{{ end -}}
-
-# Added section: additional registries and the endpoints
-[plugins.cri.registry.mirrors]
- [plugins.cri.registry.mirrors."registry.localhost:5000"]
- endpoint = ["http://registry.localhost:5000"]
-
and then mount it at /var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl
(where containerd
in your k3d nodes will load it) when creating the k3d cluster:
k3d cluster create mycluster \
- --volume ${HOME}/.k3d/config.toml.tmpl:/var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl
-
By default, k3d will update your default kubeconfig with your new cluster’s details and set the current-context to it (can be disabled).
To get a kubeconfig set up for you to connect to a k3d cluster without this automatism, there are several ways.
We determine the path of the used or default kubeconfig in two ways:
-KUBECONFIG
environment variable, if it specifies exactly one file$HOME/.kube/config
)Create a new kubeconfig file after cluster creation
-k3d kubeconfig write mycluster
$HOME/.k3d/kubeconfig-mycluster.yaml
export KUBECONFIG=$(k3d kubeconfig write mycluster)
k3d kubeconfig get mycluster > some-file.yaml
Update your default kubeconfig upon cluster creation (DEFAULT)
-k3d cluster create mycluster --kubeconfig-update-default
--kubeconfig-switch-context
to do so)Update your default kubeconfig after cluster creation
-k3d kubeconfig merge mycluster --kubeconfig-merge-default
--kubeconfig-switch-context
to do so)Update a different kubeconfig after cluster creation
-k3d kubeconfig merge mycluster --output some/other/file.yaml
Switching the current context
-None of the above options switch the current-context by default.
-This is intended to be least intrusive, since the current-context has a global effect.
-You can switch the current-context directly with the kubeconfig merge
command by adding the --kubeconfig-switch-context
flag.
k3d cluster delete mycluster
will always remove the details for mycluster
from the default kubeconfig.
-It will also delete the respective kubeconfig file in $HOME/.k3d/
if it exists.
k3d kubeconfig merge
lets you specify one or more clusters via arguments or all via --all
.
-All kubeconfigs will then be merged into a single file if --kubeconfig-merge-default
or --output
is specified.
-If none of those two flags was specified, a new file will be created per cluster and the merged path (e.g. $HOME/.k3d/kubeconfig-cluster1.yaml:$HOME/.k3d/cluster2.yaml
) will be returned.
Note that with multiple clusters specified, the --kubeconfig-switch-context
flag will change the current context to the cluster which was last in the list.
Important note
For the best results (and fewer unexpected issues), choose 1, 3, 5, … server nodes. At least 2 cores and 4GiB of RAM are recommended.
-Create a cluster with 3 server nodes using k3s’ embedded etcd (old: dqlite) database.
-The first server to be created will use the --cluster-init
flag and k3d will wait for it to be up and running before creating (and connecting) the other server nodes.
k3d cluster create multiserver --servers 3
-
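For reference, the same cluster can also be described with a config file (fields as in the config file section above); this sketch mirrors the command:

```yaml
apiVersion: k3d.io/v1alpha2
kind: Simple
name: multiserver
servers: 3   # uneven numbers (1, 3, 5, ...) work best for etcd quorum
```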
In theory (and also in practice in most cases), this is as easy as executing the following command:
-k3d node create newserver --cluster multiserver --role server
-
There’s a trap!
-If your cluster was initially created with only a single server node, then this will fail.
-That’s because the initial server node was not started with the --cluster-init
flag and thus is not using the etcd (old: dqlite) backend.