kind

  • Create the default cluster: sudo kind create cluster; this creates a cluster named kind

    • Docker must be installed first; otherwise the following error occurs

      image-20240721151910409

    • Pull the node image in advance via a mirror registry; otherwise kind will hang on the download:

      sudo docker pull kindest/node:v1.30.0
    • Success

      image-20240721163019171

    • If only the default cluster exists, kubectl get nodes reports the following error

      image-20240721181116756

    • With a custom cluster (i.e., more than just the control-plane), only the control-plane is visible because the default cluster at this point is kind

      image-20240727185428330
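The steps above (pre-pull the node image, create the cluster, verify the node) can be sketched as one script. The image tag matches the pull command above; the `command -v` guard and the function name `create_default_cluster` are my own additions, not kind commands:

```shell
#!/usr/bin/env bash
# Sketch of the default-cluster workflow described above.
# create_default_cluster is a hypothetical helper name, not a kind command.
create_default_cluster() {
  if ! command -v kind >/dev/null 2>&1; then
    echo "kind is not installed; skipping"
    return 0
  fi
  # Pre-pull the node image so cluster creation does not hang on the download.
  sudo docker pull kindest/node:v1.30.0
  # Creates a cluster named "kind" by default.
  sudo kind create cluster
  # Verify the control-plane node came up.
  kubectl get nodes
}

create_default_cluster
```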

  • Create a cluster with a specified name: kind create cluster --name kind-2

  • List all clusters: kind get clusters

    image-20240721170105554

  • Show the active cluster: kubectl config get-contexts

    image-20240721205120677
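The active context can also be printed directly with kubectl config current-context (a standard kubectl subcommand); sketched here with a guard in case kubectl is missing or no context is set, and a hypothetical wrapper name of my own:

```shell
#!/usr/bin/env bash
# Print the context kubectl is currently pointing at.
# show_current_context is a hypothetical wrapper name.
show_current_context() {
  if command -v kubectl >/dev/null 2>&1; then
    kubectl config current-context 2>/dev/null || echo "no current context set"
  else
    echo "kubectl is not installed"
  fi
}

show_current_context
```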

  • Switch the default cluster: kubectl config use-context <context-name>

    image-20240721205148763

  • Show information for a specified cluster: kubectl cluster-info --context kind-kind

    image-20240721165532903
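kind registers each cluster in kubeconfig under a context named kind-<cluster-name>, which is why the default cluster kind is addressed as kind-kind above. A trivial helper to build the context name (kind_context is my own name, not part of kind):

```shell
#!/usr/bin/env bash
# kind prefixes kubeconfig context names with "kind-".
# kind_context is a hypothetical helper, not part of kind itself.
kind_context() {
  echo "kind-$1"
}

kind_context kind    # prints kind-kind (the default cluster)
kind_context kind-2  # prints kind-kind-2
```

For example, kubectl config use-context "$(kind_context kind-2)" would switch the default cluster to kind-2.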

  • Delete a specified cluster: kind delete cluster --name kind-2

    image-20240721170133068

  • List nodes: kind get nodes; by default there is only one control-plane node

    image-20240721173140062
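Combining the list and delete commands above, every cluster can be removed in one loop. A sketch; the function name is mine, and it assumes kind is on PATH:

```shell
#!/usr/bin/env bash
# Delete every kind cluster reported by `kind get clusters`.
# delete_all_kind_clusters is a hypothetical helper name.
delete_all_kind_clusters() {
  if ! command -v kind >/dev/null 2>&1; then
    echo "kind is not installed; nothing to delete"
    return 0
  fi
  for cluster in $(kind get clusters); do
    kind delete cluster --name "$cluster"
  done
}

delete_all_kind_clusters
```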

kind error

kind multi-cluster

Problem

  • When creating a third cluster with kind, it fails with ERROR: failed to create cluster: could not find a log line that matches "Reached target .*Multi-User System.*|detected cgroup v1"

image-20240721184227233

Solution

  • Upstream issue: ERROR: failed to create cluster: could not find a log line that matches "Reached target .*Multi-User System.*|detected cgroup v1" #3423

  • Edit /etc/sysctl.conf, add the two lines below, then apply them with sysctl -p:

    fs.inotify.max_user_watches = 524288
    fs.inotify.max_user_instances = 512
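After sysctl -p, the new limits can be read back from /proc to confirm they took effect (assumes a Linux host):

```shell
#!/usr/bin/env bash
# Read back the inotify limits set in /etc/sysctl.conf.
watches=$(cat /proc/sys/fs/inotify/max_user_watches)
instances=$(cat /proc/sys/fs/inotify/max_user_instances)
echo "fs.inotify.max_user_watches   = $watches"
echo "fs.inotify.max_user_instances = $instances"
```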
  • This surfaced a new problem: the VM had too little memory allocated. Raising it from 2 GB to 3 GB eased the symptom but did not fix it; to be revisited (noted 240721)

    ERROR: failed to create cluster: failed to init node with kubeadm: command "docker exec --privileged example2-control-plane kubeadm init --skip-phases=preflight --config=/kind/kubeadm.conf --skip-token-print --v=6" failed with error: exit status 1
    Command Output: I0721 13:23:18.293346 139 initconfiguration.go:260] loading configuration from "/kind/kubeadm.conf"
    W0721 13:23:18.321404 139 initconfiguration.go:348] [config] WARNING: Ignored YAML document with GroupVersionKind kubeadm.k8s.io/v1beta3, Kind=JoinConfiguration

    couldn't initialize a Kubernetes cluster
    k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase.func1
    k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:110
    k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
    k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:125
    k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
    k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:259
    k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
    k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
    k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
    k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
    k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
    k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:128
    github.com/spf13/cobra.(*Command).execute
    github.com/spf13/[email protected]/command.go:940
    github.com/spf13/cobra.(*Command).ExecuteC
    github.com/spf13/[email protected]/command.go:1068
    github.com/spf13/cobra.(*Command).Execute
    github.com/spf13/[email protected]/command.go:992
    k8s.io/kubernetes/cmd/kubeadm/app.Run
    k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:52
    main.main
    k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
    runtime.main
    runtime/proc.go:271
    runtime.goexit
    runtime/asm_amd64.s:1695
    error execution phase wait-control-plane
    k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
    k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:260
    k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
    k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
    k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
    k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
    k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
    k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:128
    github.com/spf13/cobra.(*Command).execute
    github.com/spf13/[email protected]/command.go:940
    github.com/spf13/cobra.(*Command).ExecuteC
    github.com/spf13/[email protected]/command.go:1068
    github.com/spf13/cobra.(*Command).Execute
    github.com/spf13/[email protected]/command.go:992
    k8s.io/kubernetes/cmd/kubeadm/app.Run
    k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:52
    main.main
    k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
    runtime.main
    runtime/proc.go:271
    runtime.goexit
    runtime/asm_amd64.s:1695

Unable to connect to the server: net/http: TLS handshake timeout
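Since the failure appeared to track the VM's memory allocation, a quick memory check before retrying is cheap. A minimal sketch assuming Linux and /proc/meminfo:

```shell
#!/usr/bin/env bash
# Show total and available memory in MiB (Linux /proc/meminfo).
mem_total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
mem_avail_kb=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
echo "MemTotal:     $((mem_total_kb / 1024)) MiB"
echo "MemAvailable: $((mem_avail_kb / 1024)) MiB"
```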