docs: cleanup, fix formatting, etc.

pull/626/head^2
iwilltry42 3 years ago
parent e4cefaab27
commit e98ff9a964
GPG Key ID: 7BA57AD1CFF16110
13 changed files:

1. docs/faq/.pages (3 changed lines)
2. docs/faq/faq.md (47 changed lines)
3. docs/faq/v1vsv3-comparison.md (63 changed lines)
4. docs/index.md (8 changed lines)
5. docs/internals/defaults.md (33 changed lines)
6. docs/internals/networking.md (13 changed lines)
7. docs/usage/configfile.md (21 changed lines)
8. docs/usage/guides/calico.md (44 changed lines)
9. docs/usage/guides/cuda.md (15 changed lines)
10. docs/usage/guides/exposing_services.md (31 changed lines)
11. docs/usage/guides/registries.md (20 changed lines)
12. docs/usage/kubeconfig.md (41 changed lines)
13. docs/usage/multiserver.md (14 changed lines)

@ -1,4 +1,3 @@
title: FAQ
nav:
- faq.md
- v1vsv3-comparison.md
- faq.md

@ -3,11 +3,11 @@
## Issues with BTRFS
- As [@jaredallard](https://github.com/jaredallard) [pointed out](https://github.com/rancher/k3d/pull/48), people running `k3d` on a system with **btrfs**, may need to mount `/dev/mapper` into the nodes for the setup to work.
- This will do: `k3d cluster create CLUSTER_NAME -v /dev/mapper:/dev/mapper`
- This will do: `#!bash k3d cluster create CLUSTER_NAME -v /dev/mapper:/dev/mapper`
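If you are unsure whether Docker's data directory is on btrfs at all, a quick check could look like this (assuming the default `/var/lib/docker` location):
```bash
# prints "btrfs" if Docker's data directory lives on a btrfs filesystem
stat -f --format=%T /var/lib/docker
```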
## Issues with ZFS
- k3s currently has [no support for ZFS](https://github.com/rancher/k3s/issues/66) and thus, creating multi-server setups (e.g. `k3d cluster create multiserver --servers 3`) fails, because the initializing server node (server flag `--cluster-init`) errors out with the following log:
- k3s currently has [no support for ZFS](https://github.com/rancher/k3s/issues/66) and thus, creating multi-server setups (e.g. `#!bash k3d cluster create multiserver --servers 3`) fails, because the initializing server node (server flag `--cluster-init`) errors out with the following log:
```bash
starting kubernetes: preparing server: start cluster and https: raft_init(): io: create I/O capabilities probe file: posix_allocate: operation not supported on socket
@ -24,7 +24,13 @@
- Possible [fix/workaround by @zer0def](https://github.com/rancher/k3d/issues/133#issuecomment-549065666):
- use a docker storage driver which cleans up properly (e.g. overlay2)
- clean up or expand docker root filesystem
- change the kubelet's eviction thresholds upon cluster creation: `k3d cluster create --k3s-agent-arg '--kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1%' --k3s-agent-arg '--kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1%'`
- change the kubelet's eviction thresholds upon cluster creation:
```bash
k3d cluster create \
--k3s-agent-arg '--kubelet-arg=eviction-hard=imagefs.available<1%,nodefs.available<1%' \
--k3s-agent-arg '--kubelet-arg=eviction-minimum-reclaim=imagefs.available=1%,nodefs.available=1%'
```
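To figure out whether the storage driver or a full Docker root filesystem is the culprit, standard Docker commands are usually enough (a diagnostic sketch, nothing k3d-specific):
```bash
# which storage driver the daemon uses (overlay2 is the recommended one)
docker info --format '{{.Driver}}'

# how much space images, containers and volumes consume
docker system df

# reclaim space used by unused images, containers and networks
docker system prune
```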
## Restarting a multi-server cluster or the initializing server node fails
@ -39,12 +45,22 @@
- The Problem: Passing a feature flag to the Kubernetes API Server running inside k3s.
- Example: you want to enable the EphemeralContainers feature flag in Kubernetes
- Solution: `#!bash k3d cluster create --k3s-server-arg '--kube-apiserver-arg=feature-gates=EphemeralContainers=true'`
- Note: Be aware of where the flags require dashes (`--`) and where not.
- **Note**: Be aware of where the flags require dashes (`--`) and where they don't.
- the k3s flag (`--kube-apiserver-arg`) has the dashes
- the kube-apiserver flag `feature-gates` doesn't have them (k3s adds them internally)
- Second example: `#!bash k3d cluster create k3d-one --k3s-server-arg --cluster-cidr="10.118.0.0/17" --k3s-server-arg --service-cidr="10.118.128.0/17" --k3s-server-arg --disable=servicelb --k3s-server-arg --disable=traefik --verbose`
- Note: There are many ways to use the `"` and `'` quotes, just be aware, that sometimes shells also try to interpret/interpolate parts of the commands
- Second example:
```bash
k3d cluster create k3d-one \
--k3s-server-arg --cluster-cidr="10.118.0.0/17" \
--k3s-server-arg --service-cidr="10.118.128.0/17" \
--k3s-server-arg --disable=servicelb \
--k3s-server-arg --disable=traefik \
--verbose
```
- **Note**: There are many ways to use the `"` and `'` quotes; just be aware that sometimes shells also try to interpret/interpolate parts of the commands
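To sanity-check that such arguments actually reached k3s, you can e.g. inspect the pod CIDRs assigned to the nodes after creation (just one possible check, not an official verification method):
```bash
# should print ranges from 10.118.0.0/17 if --cluster-cidr was applied
kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'
```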
## How to access services (like a database) running on my Docker Host Machine
@ -52,12 +68,17 @@
## Running behind a corporate proxy
Running k3d behind a corporate proxy can lead to some issues with k3d that have already been reported in more than one issue.
Running k3d behind a corporate proxy can lead to some issues with k3d that have already been reported in more than one issue.
Some can be fixed by passing the `HTTP_PROXY` environment variables to k3d, some have to be fixed in docker's `daemon.json` file and some are as easy as adding a volume mount.
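For the environment variable part, a sketch could look like the following; the proxy address is made up and the exact `--env` syntax may differ between k3d versions, so adapt it to your setup:
```bash
# hypothetical corporate proxy address
k3d cluster create mycluster \
  --env 'HTTP_PROXY=http://proxy.corp.example:3128' \
  --env 'HTTPS_PROXY=http://proxy.corp.example:3128' \
  --env 'NO_PROXY=localhost,127.0.0.1'
```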
### Pods fail to start: `x509: certificate signed by unknown authority`
- Example Error Message: `Failed to create pod sandbox: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: x509: certificate signed by unknown authority`
- Example Error Message:
```bash
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to get sandbox image "docker.io/rancher/pause:3.1": failed to pull image "docker.io/rancher/pause:3.1": failed to pull and unpack image "docker.io/rancher/pause:3.1": failed to resolve reference "docker.io/rancher/pause:3.1": failed to do request: Head https://registry-1.docker.io/v2/rancher/pause/manifests/3.1: x509: certificate signed by unknown authority
```
- Problem: inside the container, the certificate of the corporate proxy cannot be validated
- Possible Solution: Mounting the CA Certificate from your host into the node containers at start time via `k3d cluster create --volume /path/to/your/certs.crt:/etc/ssl/certs/yourcert.crt`
- Issue: [rancher/k3d#535](https://github.com/rancher/k3d/discussions/535#discussioncomment-474982)
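The same mount command from the possible solution above, fenced and split for readability (the certificate paths are placeholders):
```bash
k3d cluster create \
  --volume /path/to/your/certs.crt:/etc/ssl/certs/yourcert.crt
```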
@ -75,6 +96,14 @@ Some can be fixed by passing the `HTTP_PROXY` environment variables to k3d, some
- When: This happens when running k3d on a Linux system with a kernel version >= 5.12.2 (and others like >= 5.11.19) when creating a new cluster
- the node(s) stop or get stuck with a log line like this: `<TIMESTAMP> F0516 05:05:31.782902 7 server.go:495] open /proc/sys/net/netfilter/nf_conntrack_max: permission denied`
- Why: The issue was introduced by a change in the Linux kernel ([Changelog 5.12.2](https://cdn.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.12.2): [Commit](https://github.com/torvalds/linux/commit/671c54ea8c7ff47bd88444f3fffb65bf9799ce43)), that changed the netfilter_conntrack behavior in a way that `kube-proxy` is not able to set the `nf_conntrack_max` value anymore
- Workaround: as a workaround, we can tell `kube-proxy` to not even try to set this value: `k3d cluster create --k3s-server-arg "--kube-proxy-arg=conntrack-max-per-core=0" --k3s-agent-arg "--kube-proxy-arg=conntrack-max-per-core=0" --image rancher/k3s:v1.20.6-k3s`
- Workaround: as a workaround, we can tell `kube-proxy` to not even try to set this value:
```bash
k3d cluster create \
--k3s-server-arg "--kube-proxy-arg=conntrack-max-per-core=0" \
--k3s-agent-arg "--kube-proxy-arg=conntrack-max-per-core=0" \
--image rancher/k3s:v1.20.6-k3s
```
- Fix: This is going to be fixed "upstream" in k3s itself in [rancher/k3s#3337](https://github.com/k3s-io/k3s/pull/3337) and backported to k3s versions as low as v1.18.
- Issue Reference: [rancher/k3s#607](https://github.com/rancher/k3d/issues/607)
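To check whether your machine is affected before creating a cluster, you can look at the kernel version and the current conntrack limit (plain Linux commands, nothing k3d-specific):
```bash
# affected kernel versions are >= 5.12.2 (and some >= 5.11.19)
uname -r

# the value kube-proxy would try (and fail) to change
sysctl net.netfilter.nf_conntrack_max
```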

@ -1,63 +0,0 @@
# Feature Comparison: v1 vs. v3
## v1.x feature -> implementation in v3
```text
- k3d
- check-tools -> won't do
- shell -> planned: `k3d shell CLUSTER`
- --name -> planned: drop (now as arg)
- --command -> planned: keep
- --shell -> planned: keep (or second arg)
- auto, bash, zsh
- create -> `k3d cluster create CLUSTERNAME`
- --name -> dropped, implemented via arg
- --volume -> implemented
- --port -> implemented
- --port-auto-offset -> TBD
- --api-port -> implemented
- --wait -> implemented
- --image -> implemented
- --server-arg -> implemented as `--k3s-server-arg`
- --agent-arg -> implemented as `--k3s-agent-arg`
- --env -> planned
- --label -> planned
- --workers -> implemented
- --auto-restart -> dropped (docker's `unless-stopped` is set by default)
- --enable-registry -> coming in v4.0.0 (2021) as `--registry-create` and `--registry-use`
- --registry-name -> TBD
- --registry-port -> TBD
- --registry-volume -> TBD
- --registries-file -> TBD
- --enable-registry-cache -> TBD
- (add-node) -> `k3d node create NODENAME`
- --role -> implemented
- --name -> dropped, implemented as arg
- --count -> implemented as `--replicas`
- --image -> implemented
- --arg -> planned
- --env -> planned
- --volume -> planned
- --k3s -> TBD
- --k3s-secret -> TBD
- --k3s-token -> TBD
- delete -> `k3d cluster delete CLUSTERNAME`
- --name -> dropped, implemented as arg
- --all -> implemented
- --prune -> TBD
- --keep-registry-volume -> TBD
- stop -> `k3d cluster stop CLUSTERNAME`
- --name -> dropped, implemented as arg
- --all -> implemented
- start -> `k3d cluster start CLUSTERNAME`
- --name -> dropped, implemented as arg
- --all -> implemented
- list -> dropped, implemented as `k3d get clusters`
- get-kubeconfig -> `k3d kubeconfig get|merge CLUSTERNAME`
- --name -> dropped, implemented as arg
- --all -> implemented
- --overwrite -> implemented
- import-images -> `k3d image import [--cluster CLUSTERNAME] [--keep] IMAGES`
- --name -> implemented as `--cluster`
- --no-remove -> implemented as `--keep-tarball`
```
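As a rough illustration of the mapping above, the same small cluster in both CLI generations (flag sets simplified, treat it as a sketch):
```bash
# v1.x
k3d create --name mycluster --workers 2 --api-port 6550

# v3.x
k3d cluster create mycluster --agents 2 --api-port 6550
```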

@ -16,10 +16,10 @@ k3d makes it very easy to create single- and multi-node [k3s](https://github.com
!!! Tip "k3d demo repository: [iwilltry42/k3d-demo](https://github.com/iwilltry42/k3d-demo)"
Featured use-cases include:
- **hot-reloading** of code when developing on k3d (Python Flask App)
- build-deploy-test cycle using **Tilt**
- full cluster lifecycle for simple and **multi-server** clusters
- Proof of Concept of using k3d as a service in **Drone CI**
- **hot-reloading** of code when developing on k3d (Python Flask App)
- build-deploy-test cycle using **Tilt**
- full cluster lifecycle for simple and **multi-server** clusters
- Proof of Concept of using k3d as a service in **Drone CI**
- [Rancher Meetup - May 2020 - Simplifying Your Cloud-Native Development Workflow With K3s, K3c and K3d (YouTube)](https://www.youtube.com/watch?v=hMr3prm9gDM)

@ -1,15 +1,22 @@
# Defaults
- multiple server nodes
- by default, when `--server` > 1 and no `--datastore-x` option is set, the first server node (server-0) will be the initializing server node
- the initializing server node will have the `--cluster-init` flag appended
- all other server nodes will refer to the initializing server node via `--server https://<init-node>:6443`
- API-Ports
- by default, we expose the API-Port (`6443`) by forwarding traffic from the default server loadbalancer (nginx container) to the server node(s)
- port `6443` of the loadbalancer is then mapped to a specific (`--api-port` flag) or a random (default) port on the host system
- kubeconfig
- if `--kubeconfig-update-default` is set, we use the default loading rules to get the default kubeconfig:
- First: kubeconfig specified via the KUBECONFIG environment variable (error out if multiple are specified)
- Second: default kubeconfig in home directory (e.g. `$HOME/.kube/config`)
- Networking
- [by default, k3d creates a new (docker) network for every cluster](./networking)
## Multiple server nodes
- by default, when `--servers` > 1 and no `--datastore-x` option is set, the first server node (server-0) will be the initializing server node
- the initializing server node will have the `--cluster-init` flag appended
- all other server nodes will refer to the initializing server node via `--server https://<init-node>:6443`
## API-Ports
- by default, we expose the API-Port (`6443`) by forwarding traffic from the default server loadbalancer (nginx container) to the server node(s)
- port `6443` of the loadbalancer is then mapped to a specific (`--api-port` flag) or a random (default) port on the host system
## Kubeconfig
- if `--kubeconfig-update-default` is set, we use the default loading rules to get the default kubeconfig:
- First: kubeconfig specified via the KUBECONFIG environment variable (error out if multiple are specified)
- Second: default kubeconfig in home directory (e.g. `$HOME/.kube/config`)
## Networking
- [by default, k3d creates a new (docker) network for every cluster](./networking)
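A small example tying these defaults together (cluster name and host port are arbitrary):
```bash
# server-0 gets --cluster-init, server-1 and server-2 join it;
# the API is reachable on host port 6445 via the serverlb
k3d cluster create multi --servers 3 --api-port 6445

# see the actual host port mapping of the loadbalancer
docker ps --filter name=k3d-multi-serverlb --format '{{.Names}}: {{.Ports}}'
```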

@ -5,23 +5,22 @@
## Introduction
By default, k3d creates a new (docker) network for every new cluster.
Using the `--network STRING` flag upon creation to connect to an existing network.
By default, k3d creates a new (docker) network for every new cluster.
Use the `--network STRING` flag upon creation to connect to an existing network.
Existing networks won't be managed by k3d together with the cluster lifecycle.
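For completeness, connecting to a pre-existing user-defined network could look like this (the network name is arbitrary; as noted above, k3d won't delete it when the cluster is removed):
```bash
docker network create my-existing-net
k3d cluster create mycluster --network my-existing-net
```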
## Connecting to docker "internal"/pre-defined networks
### `host` network
When using the `--network` flag to connect to the host network (i.e. `k3d cluster create --network host`),
you won't be able to create more than **one server node**.
When using the `--network` flag to connect to the host network (i.e. `k3d cluster create --network host`), you won't be able to create more than **one server node**.
An edge case would be one server node (with agent disabled) and one agent node.
### `bridge` network
By default, every network that k3d creates is working in `bridge` mode.
But when you try to use `--network bridge` to connect to docker's internal `bridge` network, you may
run into issues with grabbing certificates from the API-Server. Single-Node clusters should work though.
By default, every network that k3d creates is working in `bridge` mode.
But when you try to use `--network bridge` to connect to docker's internal `bridge` network, you may run into issues with grabbing certificates from the API-Server.
Single-Node clusters should work though.
### `none` "network"

@ -2,11 +2,11 @@
## Introduction
As of k3d v4.0.0, released in January 2021, k3d ships with configuration file support for the `k3d cluster create` command.
As of k3d v4.0.0, released in January 2021, k3d ships with configuration file support for the `k3d cluster create` command.
This allows you to define all the things that you defined with CLI flags before in a nice and tidy YAML (as a Kubernetes user, we know you love it ;) ).
!!! info "Syntax & Semantics"
The options defined in the config file are not 100% the same as the CLI flags.
The options defined in the config file are not 100% the same as the CLI flags.
This concerns naming and style/usage/structure, e.g.
- `--api-port` is split up into a field named `kubeAPI` that has 3 different "child fields" (`host`, `hostIP` and `hostPort`)
@ -37,13 +37,13 @@ kind: Simple
## Config Options
The configuration options for k3d are continuously evolving and so is the config file (syntax) itself.
The configuration options for k3d are continuously evolving and so is the config file (syntax) itself.
Currently, the config file is still in an Alpha-State, meaning that it is subject to change at any time (though we try to keep breaking changes low).
!!! info "Validation via JSON-Schema"
k3d uses a [JSON-Schema](https://json-schema.org/) to describe the expected format and fields of the configuration file.
This schema is also used to [validate](https://github.com/xeipuuv/gojsonschema#validation) a user-given config file.
This JSON-Schema can be found in the specific config version sub-directory in the repository (e.g. [here for `v1alpha2`](https://github.com/rancher/k3d/blob/main/pkg/config/v1alpha2/schema.json)) and could be used to lookup supported fields or by linters to validate the config file, e.g. in your code editor.
k3d uses a [JSON-Schema](https://json-schema.org/) to describe the expected format and fields of the configuration file.
This schema is also used to [validate](https://github.com/xeipuuv/gojsonschema#validation) a user-given config file.
This JSON-Schema can be found in the specific config version sub-directory in the repository (e.g. [here for `v1alpha2`](https://github.com/rancher/k3d/blob/main/pkg/config/v1alpha2/schema.json)) and could be used to lookup supported fields or by linters to validate the config file, e.g. in your code editor.
### All Options: Example
@ -111,15 +111,14 @@ options:
## Config File vs. CLI Flags
k3d uses [`Cobra`](https://github.com/spf13/cobra) and [`Viper`](https://github.com/spf13/viper) for CLI and general config handling respectively.
k3d uses [`Cobra`](https://github.com/spf13/cobra) and [`Viper`](https://github.com/spf13/viper) for CLI and general config handling respectively.
This automatically introduces a "config option order of priority" ([precedence order](https://github.com/spf13/viper#why-viper)):
!!! info "Config Precedence Order"
Source: [spf13/viper#why-viper](https://github.com/spf13/viper#why-viper)
Source: [spf13/viper#why-viper](https://github.com/spf13/viper#why-viper)
>Internal Setting > **CLI Flag** > Environment Variable > **Config File** > (k/v store >) Defaults
Internal Setting > **CLI Flag** > Environment Variable > **Config File** > (k/v store >) Defaults
This means, that you can define e.g. a "base configuration file" with settings that you share across different clusters and override only the fields that differ between those clusters in your CLI flags/arguments.
This means that you can define e.g. a "base configuration file" with settings that you share across different clusters and override only the fields that differ between those clusters in your CLI flags/arguments.
For example, you use the same config file to create three clusters which only have different names and `kubeAPI` (`--api-port`) settings.
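Following that idea, a sketch of reusing one base config for several clusters (the file name is hypothetical; `--config` is the flag that loads a config file):
```bash
# same base settings, only name and API port differ
k3d cluster create cluster-a --config base-config.yaml --api-port 6551
k3d cluster create cluster-b --config base-config.yaml --api-port 6552
```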
## References

@ -1,28 +1,39 @@
# Use Calico instead of Flannel
If you want to use NetworkPolicy you can use Calico in k3s instead of Flannel.
### 1. Download and modify the Calico descriptor
## 1. Download and modify the Calico descriptor
You can follow the [documentation](https://docs.projectcalico.org/master/reference/cni-plugin/configuration).
Then you have to change the ConfigMap `calico-config`: in the `cni_network_config`, add the entry for allowing IP forwarding:
```json
"container_settings": {
"allow_ip_forwarding": true
}
"container_settings": {
"allow_ip_forwarding": true
}
```
Or you can directly use this [calico.yaml](calico.yaml) manifest
## 2. Create the cluster without flannel and with calico
On the k3s cluster creation:
- add the flag `--flannel-backend=none`. For this, on k3d you need to forward this flag to k3s with the option `--k3s-server-arg`.
- mount (`--volume`) the calico descriptor in the auto deploy manifest directory of k3s `/var/lib/rancher/k3s/server/manifests/`
So the cluster creation command is (when you are at the root of the k3d repository):
```bash
k3d cluster create "${clustername}" --k3s-server-arg '--flannel-backend=none' --volume "$(pwd)/docs/usage/guides/calico.yaml:/var/lib/rancher/k3s/server/manifests/calico.yaml"
k3d cluster create "${clustername}" \
--k3s-server-arg '--flannel-backend=none' \
--volume "$(pwd)/docs/usage/guides/calico.yaml:/var/lib/rancher/k3s/server/manifests/calico.yaml"
```
In this example:
- change `"${clustername}"` with the name of the cluster (or set a variable).
- change `"${clustername}"` with the name of the cluster (or set a variable).
- `$(pwd)/docs/usage/guides/calico.yaml` is the absolute path of the calico manifest, you can adapt it.
You can add other options; [see the commands documentation](../commands.md).
@ -30,12 +41,14 @@ You can add other options, [see](../commands.md).
The cluster will start without flannel and with Calico as CNI Plugin.
To watch the pod(s) deployment:
```bash
watch "kubectl get pods -n kube-system"
watch "kubectl get pods -n kube-system"
```
You will have something like this at beginning (with the command line `kubectl get pods -n kube-system`)
```
You will have something like this at the beginning (with the command line `#!bash kubectl get pods -n kube-system`):
```bash
NAME READY STATUS RESTARTS AGE
helm-install-traefik-pn84f 0/1 Pending 0 3s
calico-node-97rx8 0/1 Init:0/3 0 3s
@ -46,7 +59,8 @@ coredns-8655855d6-cxtnr 0/1 Pending 0 2s
```
And when it finishes starting:
```
```bash
NAME READY STATUS RESTARTS AGE
metrics-server-7566d596c8-hwnqq 1/1 Running 0 56s
calico-node-97rx8 1/1 Running 0 57s
@ -58,10 +72,12 @@ traefik-758cd5fc85-x8p57 1/1 Running 0 28s
coredns-8655855d6-cxtnr 1/1 Running 0 56s
```
Note :
Note:
- you can use the auto deploy manifest or a kubectl apply depending on your needs
- <!> Calico is not as quick as Flannel (but it provides more features)
- :exclamation: Calico is not as quick as Flannel (but it provides more features)
## References
https://rancher.com/docs/k3s/latest/en/installation/network-options/
https://docs.projectcalico.org/getting-started/kubernetes/k3s/
- <https://rancher.com/docs/k3s/latest/en/installation/network-options/>
- <https://docs.projectcalico.org/getting-started/kubernetes/k3s/>

@ -1,12 +1,15 @@
# Running CUDA workloads
If you want to run CUDA workloads on the K3S container you need to customize the container.
CUDA workloads require the NVIDIA Container Runtime, so containerd needs to be configured to use this runtime.
The K3S container itself also needs to run with this runtime. If you are using Docker you can install the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html).
If you want to run CUDA workloads on the K3S container you need to customize the container.
CUDA workloads require the NVIDIA Container Runtime, so containerd needs to be configured to use this runtime.
The K3S container itself also needs to run with this runtime.
If you are using Docker you can install the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html).
## Building a customized K3S image
To get the NVIDIA container runtime in the K3S image you need to build your own K3S image. The native K3S image is based on Alpine but the NVIDIA container runtime is not supported on Alpine yet. To get around this we need to build the image with a supported base image.
To get the NVIDIA container runtime in the K3S image you need to build your own K3S image.
The native K3S image is based on Alpine but the NVIDIA container runtime is not supported on Alpine yet.
To get around this we need to build the image with a supported base image.
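Once such an image is built (see the Dockerfile below), using it with k3d is a matter of the `--image` flag; the image name and tag here are made up, and GPU access additionally requires the NVIDIA runtime to be usable by Docker:
```bash
# build the customized K3s image from the adapted Dockerfile
docker build -t my-registry/k3s-cuda:v1.20.6-k3s1 .

# create a cluster from it
k3d cluster create gputest --image my-registry/k3s-cuda:v1.20.6-k3s1
```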
### Adapt the Dockerfile
@ -51,7 +54,7 @@ ENTRYPOINT ["/bin/k3s"]
CMD ["agent"]
```
This [Dockerfile](cuda/Dockerfile) is based on the [K3S Dockerfile](https://github.com/rancher/k3s/blob/master/package/Dockerfile).
This [Dockerfile](cuda/Dockerfile) is based on the [K3s Dockerfile](https://github.com/rancher/k3s/blob/master/package/Dockerfile).
The following changes are applied:
1. Change the base images to Ubuntu 18.04 so the NVIDIA Container Runtime can be installed
@ -60,7 +63,7 @@ The following changes are applied:
### Configure containerd
We need to configure containerd to use the NVIDIA Container Runtime. We need to customize the config.toml that is used at startup. K3S provides a way to do this using a [config.toml.tmpl](cuda/config.toml.tmpl) file. More information can be found on the [K3S site](https://rancher.com/docs/k3s/latest/en/advanced/#configuring-containerd).
We need to configure containerd to use the NVIDIA Container Runtime by customizing the config.toml that is used at startup. K3s provides a way to do this using a [config.toml.tmpl](cuda/config.toml.tmpl) file. More information can be found on the [K3s site](https://rancher.com/docs/k3s/latest/en/advanced/#configuring-containerd).
```toml
[plugins.opt]

@ -2,7 +2,7 @@
## 1. via Ingress (recommended)
In this example, we will deploy a simple nginx webserver deployment and make it accessible via ingress.
In this example, we will deploy a simple nginx webserver deployment and make it accessible via ingress.
Therefore, we have to create the cluster in a way that the internal port 80 (on which the `traefik` ingress controller is listening) is exposed on the host system.
1. Create a cluster, mapping the ingress port 80 to localhost:8081
@ -10,10 +10,11 @@ Therefore, we have to create the cluster in a way, that the internal port 80 (wh
`#!bash k3d cluster create --api-port 6550 -p "8081:80@loadbalancer" --agents 2`
!!! info "Good to know"
- `--api-port 6550` is not required for the example to work. It's used to have `k3s`'s API-Server listening on port 6550 with that port mapped to the host system.
- the port-mapping construct `8081:80@loadbalancer` means
- map port `8081` from the host to port `80` on the container which matches the nodefilter `loadbalancer`
- the `loadbalancer` nodefilter matches only the `serverlb` that's deployed in front of a cluster's server nodes
- `--api-port 6550` is not required for the example to work.
It's used to have `k3s`'s API-Server listening on port 6550 with that port mapped to the host system.
- the port-mapping construct `8081:80@loadbalancer` means:
"map port `8081` from the host to port `80` on the container which matches the nodefilter `loadbalancer`"
- the `loadbalancer` nodefilter matches only the `serverlb` that's deployed in front of a cluster's server nodes
- all ports exposed on the `serverlb` will be proxied to the same ports on all server nodes in the cluster
2. Get the kubeconfig file (redundant, as `k3d cluster create` already merges it into your default kubeconfig file)
@ -28,8 +29,9 @@ Therefore, we have to create the cluster in a way, that the internal port 80 (wh
`#!bash kubectl create service clusterip nginx --tcp=80:80`
5. Create an ingress object for it with `#!bash kubectl apply -f`
*Note*: `k3s` deploys [`traefik`](https://github.com/containous/traefik) as the default ingress controller
5. Create an ingress object for it by copying the following manifest to a file and applying with `#!bash kubectl apply -f thatfile.yaml`
**Note**: `k3s` deploys [`traefik`](https://github.com/containous/traefik) as the default ingress controller
```YAML
# apiVersion: networking.k8s.io/v1beta1 # for k3s < v1.19
@ -58,18 +60,17 @@ Therefore, we have to create the cluster in a way, that the internal port 80 (wh
## 2. via NodePort
1. Create a cluster, mapping the port 30080 from agent-0 to localhost:8082
1. Create a cluster, mapping the port `30080` from `agent-0` to `localhost:8082`
`#!bash k3d cluster create mycluster -p "8082:30080@agent[0]" --agents 2`
`#!bash k3d cluster create mycluster -p "8082:30080@agent[0]" --agents 2`
- **Note**: Kubernetes' default NodePort range is [`30000-32767`](https://kubernetes.io/docs/concepts/services-networking/service/#nodeport)
- **Note 1**: Kubernetes' default NodePort range is [`30000-32767`](https://kubernetes.io/docs/concepts/services-networking/service/#nodeport)
- **Note 2**: You may as well expose the whole NodePort range from the very beginning, e.g. via `k3d cluster create mycluster --agents 3 -p "30000-32767:30000-32767@server[0]"` (See [this video from @portainer](https://www.youtube.com/watch?v=5HaU6338lAk))
- **Warning**: Docker creates iptable entries and a new proxy process per port-mapping, so this may take a very long time or even freeze your system!
- **Note**: You may as well expose the whole NodePort range from the very beginning, e.g. via `k3d cluster create mycluster --agents 3 -p "30000-32767:30000-32767@server[0]"` (See [this video from @portainer](https://www.youtube.com/watch?v=5HaU6338lAk))
- **Warning**: Docker creates iptable entries and a new proxy process per port-mapping, so this may take a very long time or even freeze your system!
... (Steps 2 and 3 like above) ...
... (Steps 2 and 3 like above) ...
1. Create a NodePort service for it with `#!bash kubectl apply -f`
1. Create a NodePort service for it by copying the following manifest to a file and applying it with `#!bash kubectl apply -f`
```YAML
apiVersion: v1

@ -67,7 +67,7 @@ configs:
When using secure registries, the [`registries.yaml` file](#registries-file) must include information about the certificates. For example, if you want to use images from the secure registry running at `https://my.company.registry`, you must first download a CA file valid for that server and store it in some well-known directory like `${HOME}/.k3d/my-company-root.pem`.
Then you have to mount the CA file in some directory in the nodes in the cluster and include that mounted file in a `configs` section in the [`registries.yaml` file](#registries-file).
Then you have to mount the CA file in some directory in the nodes in the cluster and include that mounted file in a `configs` section in the [`registries.yaml` file](#registries-file).
For example, if we mount the CA file in `/etc/ssl/certs/my-company-root.pem`, the `registries.yaml` will look like:
```yaml
@ -85,7 +85,11 @@ configs:
Finally, we can create the cluster, mounting the CA file in the path we specified in `ca_file`:
`#!bash k3d cluster create --volume "${HOME}/.k3d/my-registries.yaml:/etc/rancher/k3s/registries.yaml" --volume "${HOME}/.k3d/my-company-root.pem:/etc/ssl/certs/my-company-root.pem"`
```bash
k3d cluster create \
--volume "${HOME}/.k3d/my-registries.yaml:/etc/rancher/k3s/registries.yaml" \
--volume "${HOME}/.k3d/my-company-root.pem:/etc/ssl/certs/my-company-root.pem"
```
## Using a local registry
@ -97,8 +101,10 @@ Finally, we can create the cluster, mounting the CA file in the path we specifie
#### Create a dedicated registry together with your cluster
1. `#!bash k3d cluster create mycluster --registry-create`: This creates your cluster `mycluster` together with a registry container called `k3d-mycluster-registry`
- k3d sets everything up in the cluster for containerd to be able to pull images from that registry (using the `registries.yaml` file)
- the port, which the registry is listening on will be mapped to a random port on your host system
- k3d sets everything up in the cluster for containerd to be able to pull images from that registry (using the `registries.yaml` file)
- the port, which the registry is listening on will be mapped to a random port on your host system
2. Check the k3d command output or `#!bash docker ps -f name=k3d-mycluster-registry` to find the exposed port (let's use `12345` here)
3. Pull some image (optional) `#!bash docker pull alpine:latest`, re-tag it to reference your newly created registry `#!bash docker tag alpine:latest k3d-mycluster-registry:12345/testimage:local` and push it `#!bash docker push k3d-mycluster-registry:12345/testimage:local`
4. Use kubectl to create a new pod in your cluster using that image to see, if the cluster can pull from the new registry: `#!bash kubectl run --image k3d-mycluster-registry:12345/testimage:local testimage --command -- tail -f /dev/null` (creates a container that will not do anything but keep on running)
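Steps 3 and 4 as one copy-pasteable block (port `12345` is just the example value found via `docker ps` above):
```bash
docker pull alpine:latest
docker tag alpine:latest k3d-mycluster-registry:12345/testimage:local
docker push k3d-mycluster-registry:12345/testimage:local

# run a pod from the pushed image; it just idles so we can check that it was pulled
kubectl run testimage \
  --image k3d-mycluster-registry:12345/testimage:local \
  --command -- tail -f /dev/null
```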
@ -153,7 +159,8 @@ You should test that you can
- push to your registry from your local development machine.
- use images from that registry in `Deployments` in your k3d cluster.
We will verify these two things for a local registry (located at `k3d-registry.localhost:12345`) running in your development machine. Things would be basically the same for checking an external registry, but some additional configuration could be necessary in your local machine when using an authenticated or secure registry (please refer to Docker's documentation for this).
We will verify these two things for a local registry (located at `k3d-registry.localhost:12345`) running in your development machine.
Things would be basically the same for checking an external registry, but some additional configuration could be necessary on your local machine when using an authenticated or secure registry (please refer to Docker's documentation for this).
First, we can download some image (like `nginx`) and push it to our local registry with:
@ -195,8 +202,7 @@ Then you should check that the pod is running with `kubectl get pods -l "app=ngi
## Configuring registries for k3s <= v0.9.1
k3s servers below v0.9.1 do not recognize the `registries.yaml` file as described in
the in the beginning, so you will need to embed the contents of that file in a `containerd` configuration file.
k3s servers below v0.9.1 do not recognize the `registries.yaml` file as described in the beginning, so you will need to embed the contents of that file in a `containerd` configuration file.
You will have to create your own `containerd` configuration file at some well-known path like `${HOME}/.k3d/config.toml.tmpl`, like this:
??? registriesprev091 "config.toml.tmpl"

@ -1,6 +1,6 @@
# Handling Kubeconfigs
By default, k3d will update your default kubeconfig with your new cluster's details and set the current-context to it (can be disabled).
By default, k3d will update your default kubeconfig with your new cluster's details and set the current-context to it (can be disabled).
To get a kubeconfig set up for you to connect to a k3d cluster without this automatism, you can go different ways.
??? question "What is the default kubeconfig?"
@ -12,24 +12,31 @@ To get a kubeconfig set up for you to connect to a k3d cluster without this auto
## Getting the kubeconfig for a newly created cluster
1. Create a new kubeconfig file **after** cluster creation
- `#!bash k3d kubeconfig write mycluster`
- *Note:* this will create (or update) the file `$HOME/.k3d/kubeconfig-mycluster.yaml`
- *Tip:* Use it: `#!bash export KUBECONFIG=$(k3d kubeconfig write mycluster)`
- *Note 2*: alternatively you can use `#!bash k3d kubeconfig get mycluster > some-file.yaml`
- `#!bash k3d kubeconfig write mycluster`
- *Note:* this will create (or update) the file `$HOME/.k3d/kubeconfig-mycluster.yaml`
- *Tip:* Use it: `#!bash export KUBECONFIG=$(k3d kubeconfig write mycluster)`
- *Note 2*: alternatively you can use `#!bash k3d kubeconfig get mycluster > some-file.yaml`
2. Update your default kubeconfig **upon** cluster creation (DEFAULT)
- `#!bash k3d cluster create mycluster --kubeconfig-update-default`
- *Note:* this won't switch the current-context (append `--kubeconfig-switch-context` to do so)
- `#!bash k3d cluster create mycluster --kubeconfig-update-default`
- *Note:* this won't switch the current-context (append `--kubeconfig-switch-context` to do so)
3. Update your default kubeconfig **after** cluster creation
- `#!bash k3d kubeconfig merge mycluster --kubeconfig-merge-default`
- *Note:* this won't switch the current-context (append `--kubeconfig-switch-context` to do so)
- `#!bash k3d kubeconfig merge mycluster --kubeconfig-merge-default`
- *Note:* this won't switch the current-context (append `--kubeconfig-switch-context` to do so)
4. Update a different kubeconfig **after** cluster creation
- `#!bash k3d kubeconfig merge mycluster --output some/other/file.yaml`
- *Note:* this won't switch the current-context
- The file will be created if it doesn't exist
- `#!bash k3d kubeconfig merge mycluster --output some/other/file.yaml`
- *Note:* this won't switch the current-context
- The file will be created if it doesn't exist
!!! info "Switching the current context"
None of the above options switch the current-context by default.
This is intended to be least intrusive, since the current-context has a global effect.
None of the above options switch the current-context by default.
This is intended to be least intrusive, since the current-context has a global effect.
You can switch the current-context directly with the `kubeconfig merge` command by adding the `--kubeconfig-switch-context` flag.
## Removing cluster details from the kubeconfig
@ -39,7 +46,7 @@ It will also delete the respective kubeconfig file in `$HOME/.k3d/` if it exists
## Handling multiple clusters
`k3d kubeconfig merge` let's you specify one or more clusters via arguments _or_ all via `--all`.
All kubeconfigs will then be merged into a single file if `--kubeconfig-merge-default` or `--output` is specified.
If none of those two flags was specified, a new file will be created per cluster and the merged path (e.g. `$HOME/.k3d/kubeconfig-cluster1.yaml:$HOME/.k3d/cluster2.yaml`) will be returned.
`k3d kubeconfig merge` lets you specify one or more clusters via arguments _or_ all via `--all`.
All kubeconfigs will then be merged into a single file if `--kubeconfig-merge-default` or `--output` is specified.
If none of those two flags was specified, a new file will be created per cluster and the merged path (e.g. `$HOME/.k3d/kubeconfig-cluster1.yaml:$HOME/.k3d/cluster2.yaml`) will be returned.
Note that with multiple clusters specified, the `--kubeconfig-switch-context` flag will change the current context to the cluster which was last in the list.
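For example, to merge every k3d cluster into your default kubeconfig and end up on the last cluster's context (flags taken from the options described above):
```bash
k3d kubeconfig merge --all \
  --kubeconfig-merge-default \
  --kubeconfig-switch-context
```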

@ -2,15 +2,15 @@
!!! info "Important note"
For the best results (and less unexpected issues), choose 1, 3, 5, ... server nodes.
At least 2 cores and 4GiB of RAM are recommended.
At least 2 cores and 4GiB of RAM are recommended.
## Embedded dqlite
## Embedded etcd (old: dqlite)
Create a cluster with 3 server nodes using k3s' embedded dqlite database.
Create a cluster with 3 server nodes using k3s' embedded etcd (old: dqlite) database.
The first server to be created will use the `--cluster-init` flag and k3d will wait for it to be up and running before creating (and connecting) the other server nodes.
```bash
k3d cluster create multiserver --servers 3
k3d cluster create multiserver --servers 3
```
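Afterwards, both k3d and Kubernetes should report three server nodes (just a sanity check):
```bash
k3d node list
kubectl get nodes
```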
## Adding server nodes to a running cluster
@ -18,9 +18,9 @@ The first server to be created will use the `--cluster-init` flag and k3d will w
In theory (and also in practice in most cases), this is as easy as executing the following command:
```bash
k3d node create newserver --cluster multiserver --role server
k3d node create newserver --cluster multiserver --role server
```
!!! important "There's a trap!"
If your cluster was initially created with only a single server node, then this will fail.
That's because the initial server node was not started with the `--cluster-init` flag and thus is not using the dqlite backend.
If your cluster was initially created with only a single server node, then this will fail.
That's because the initial server node was not started with the `--cluster-init` flag and thus is not using the etcd (old: dqlite) backend.
