cmd.Flags().String("api-port","random","Specify the Kubernetes API server port exposed on the LoadBalancer (Format: `--api-port [HOST:]HOSTPORT`)\n - Example: `k3d cluster create --servers 3 --api-port 0.0.0.0:6550`")
cmd.Flags().IntP("servers","s",1,"Specify how many servers you want to create")
cmd.Flags().IntP("agents","a",0,"Specify how many agents you want to create")
cmd.Flags().StringP("image","i",fmt.Sprintf("%s:%s",k3d.DefaultK3sImageRepo,version.GetK3sVersion(false)),"Specify k3s image that you want to use for the nodes")
cmd.Flags().String("network","","Join an existing network")
cmd.Flags().String("token","","Specify a cluster token. By default, we generate one.")
cmd.Flags().StringArrayP("volume","v",nil,"Mount volumes into the nodes (Format: `--volume [SOURCE:]DEST[@NODEFILTER[;NODEFILTER...]]`\n - Example: `k3d cluster create -a 2 -v /my/path@agent[0,1] -v /tmp/test:/tmp/other@server[0]`")
cmd.Flags().StringArrayP("port","p",nil,"Map ports from the node containers to the host (Format: `[HOST:][HOSTPORT:]CONTAINERPORT[/PROTOCOL][@NODEFILTER]`)\n - Example: `k3d cluster create -a 2 -p 8080:80@agent[0] -p 8081@agent[1]`")
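The `@NODEFILTER` suffix on the volume and port flags can be split off with plain string handling. The following is a minimal illustrative sketch (the `PortMapping` type and `parsePortFlag` helper are hypothetical, not k3d's actual parser):

```go
package main

import (
	"fmt"
	"strings"
)

// PortMapping holds the two parts of a
// [HOST:][HOSTPORT:]CONTAINERPORT[/PROTOCOL][@NODEFILTER] string.
// Illustrative only; not k3d's internal representation.
type PortMapping struct {
	Ports      string // the [HOST:][HOSTPORT:]CONTAINERPORT[/PROTOCOL] part
	NodeFilter string // e.g. "agent[0]"; empty if no filter was given
}

// parsePortFlag splits off the trailing @NODEFILTER, if any.
func parsePortFlag(s string) PortMapping {
	if i := strings.LastIndex(s, "@"); i >= 0 {
		return PortMapping{Ports: s[:i], NodeFilter: s[i+1:]}
	}
	return PortMapping{Ports: s}
}

func main() {
	for _, flag := range []string{"8080:80@agent[0]", "8081@agent[1]"} {
		m := parsePortFlag(flag)
		// e.g. 8080:80@agent[0] -> ports="8080:80" filter="agent[0]"
		fmt.Printf("%s -> ports=%q filter=%q\n", flag, m.Ports, m.NodeFilter)
	}
}
```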
cmd.Flags().BoolVar(&createClusterOpts.WaitForServer,"wait",true,"Wait for the server(s) to be ready before returning. Use '--timeout DURATION' to not wait forever.")
cmd.Flags().DurationVar(&createClusterOpts.Timeout,"timeout",0*time.Second,"Rollback changes if the cluster couldn't be created within the specified duration.")
cmd.Flags().BoolVar(&updateDefaultKubeconfig,"update-default-kubeconfig",true,"Directly update the default kubeconfig with the new cluster's context")
cmd.Flags().BoolVar(&updateCurrentContext,"switch-context",true,"Directly switch the default kubeconfig's current-context to the new cluster's context (implies --update-default-kubeconfig)")
cmd.Flags().BoolVar(&createClusterOpts.DisableLoadBalancer,"no-lb",false,"Disable the creation of a LoadBalancer in front of the server nodes")
/* Image Importing */
cmd.Flags().BoolVar(&createClusterOpts.DisableImageVolume,"no-image-volume",false,"Disable the creation of a volume for importing images")
/* Multi Server Configuration */
// multi-server - datastore
// TODO: implement multi-server setups with external data store
// cmd.Flags().String("datastore-endpoint", "", "[WIP] Specify external datastore endpoint (e.g. for multi server clusters)")
/*
cmd.Flags().String("datastore-network","","Specify container network where we can find the datastore-endpoint (add a connection)")
cmd.Flags().StringArrayVar(&createClusterOpts.K3sServerArgs,"k3s-server-arg",nil,"Additional args passed to the `k3s server` command on server nodes (new flag per arg)")
cmd.Flags().StringArrayVar(&createClusterOpts.K3sAgentArgs,"k3s-agent-arg",nil,"Additional args passed to the `k3s agent` command on agent nodes (new flag per arg)")
- k3s currently has [no support for ZFS](https://github.com/rancher/k3s/issues/66) and thus, creating multi-server setups (e.g. `k3d cluster create multiserver --servers 3`) fails, because the initializing server node (server flag `--cluster-init`) errors out with the following log:
```bash
starting kubernetes: preparing server: start cluster and https: raft_init(): io: create I/O capabilities probe file: posix_allocate: operation not supported on socket
```
- clean up or expand docker root filesystem
- change the kubelet's eviction thresholds upon cluster creation: `k3d cluster create --k3s-agent-arg '--kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1%' --k3s-agent-arg '--kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1%'`
## Restarting a multi-server cluster or the initializing server node fails
- What you do: You create a cluster with more than one server node and later, you either stop `server-0` or stop/start the whole cluster
- What fails: After the restart, you cannot connect to the cluster anymore and `kubectl` will give you a lot of errors
- What causes this issue: it's a [known issue with dqlite in `k3s`](https://github.com/rancher/k3s/issues/1391) which doesn't allow the initializing server node to go down
- What's the solution: Hopefully, this will be solved by the planned [replacement of dqlite with embedded etcd in k3s](https://github.com/rancher/k3s/pull/1770)
- Related issues: [#262](https://github.com/rancher/k3d/issues/262)
-i, --image # specify which k3s image should be used for the nodes
--k3s-agent-arg # add additional arguments to the k3s agent (see https://rancher.com/docs/k3s/latest/en/installation/install-options/agent-config/#k3s-agent-cli-help)
--k3s-server-arg # add additional arguments to the k3s server (see https://rancher.com/docs/k3s/latest/en/installation/install-options/server-config/#k3s-server-cli-help)
-s, --servers # specify how many server nodes you want to create
--network # specify a network you want to connect to
--no-image-volume # disable the creation of a volume for storing images (used for the 'k3d load image' command)
-p, --port # add some more port mappings
--update-default-kubeconfig # enable the automated update of the default kubeconfig with the details of the newly created cluster (also sets '--wait=true')
--switch-context # (implies --update-default-kubeconfig) automatically sets the current-context of your default kubeconfig to the new cluster's context
-v, --volume # specify additional bind-mounts
--wait # enable waiting for all server nodes to be ready before returning
-a, --agents # specify how many agent nodes you want to create
start CLUSTERNAME # start a (stopped) cluster
-a, --all # start all clusters
--wait # wait for all servers and server-loadbalancer to be up before returning
--timeout # maximum waiting time for '--wait' before canceling/returning
- `--api-port 6550` is not required for the example to work. It makes `k3s`'s API-Server listen on port 6550, with that port mapped to the host system.
- the port-mapping construct `8081:80@loadbalancer` means
- map port `8081` from the host to port `80` on the container which matches the nodefilter `loadbalancer`
- the `loadbalancer` nodefilter matches only the `serverlb` that's deployed in front of a cluster's server nodes
- all ports exposed on the `serverlb` will be proxied to the same ports on all server nodes in the cluster
2. Get the kubeconfig file
## 2. via NodePort
1. Create a cluster, mapping the port 30080 from agent-0 to localhost:8082
For the best results (and fewer unexpected issues), choose an odd number of server nodes (1, 3, 5, ...).
## Embedded dqlite
Create a cluster with 3 server nodes using k3s' embedded dqlite database.
The first server to be created will use the `--cluster-init` flag and k3d will wait for it to be up and running before creating (and connecting) the other server nodes.
```bash
k3d cluster create multiserver --servers 3
```
## Adding server nodes to a running cluster
In theory (and also in practice in most cases), this is as easy as executing the following command: