Fixing a kubelet startup error when installing a K8S cluster with sealos

After sealos brought the cluster up, kubelet on host13 kept being restarted by systemd and crashing again. The system log showed:

Feb 22 13:03:29 host13 systemd[1]: kubelet.service holdoff time over, scheduling restart.
Feb 22 13:03:29 host13 systemd[1]: Starting kubelet: The Kubernetes Node Agent...
Feb 22 13:03:29 host13 kubelet-pre-start.sh[6019]: * Applying /usr/lib/sysctl.d/00-system.conf ...
Feb 22 13:03:29 host13 kubelet-pre-start.sh[6019]: net.bridge.bridge-nf-call-ip6tables = 0
Feb 22 13:03:29 host13 kubelet-pre-start.sh[6019]: net.bridge.bridge-nf-call-iptables = 0
Feb 22 13:03:29 host13 kubelet-pre-start.sh[6019]: net.bridge.bridge-nf-call-arptables = 0
Feb 22 13:03:29 host13 kubelet-pre-start.sh[6019]: * Applying /usr/lib/sysctl.d/50-default.conf ...
Feb 22 13:03:29 host13 kubelet-pre-start.sh[6019]: kernel.sysrq = 16
Feb 22 13:03:29 host13 kubelet-pre-start.sh[6019]: kernel.core_uses_pid = 1
Feb 22 13:03:29 host13 kubelet-pre-start.sh[6019]: net.ipv4.conf.default.rp_filter = 1
Feb 22 13:03:29 host13 kubelet-pre-start.sh[6019]: net.ipv4.conf.all.rp_filter = 1
Feb 22 13:03:29 host13 kubelet-pre-start.sh[6019]: net.ipv4.conf.default.accept_source_route = 0
Feb 22 13:03:29 host13 kubelet-pre-start.sh[6019]: net.ipv4.conf.all.accept_source_route = 0
Feb 22 13:03:29 host13 kubelet-pre-start.sh[6019]: net.ipv4.conf.default.promote_secondaries = 1
Feb 22 13:03:29 host13 kubelet-pre-start.sh[6019]: net.ipv4.conf.all.promote_secondaries = 1
Feb 22 13:03:29 host13 kubelet-pre-start.sh[6019]: fs.protected_hardlinks = 1
Feb 22 13:03:29 host13 kubelet-pre-start.sh[6019]: fs.protected_symlinks = 1
Feb 22 13:03:29 host13 kubelet-pre-start.sh[6019]: * Applying /etc/sysctl.d/99-sysctl.conf ...
Feb 22 13:03:29 host13 kubelet-pre-start.sh[6019]: vm.max_map_count = 655360
Feb 22 13:03:29 host13 kubelet-pre-start.sh[6019]: * Applying /usr/lib/sysctl.d/elasticsearch.conf ...
Feb 22 13:03:29 host13 kubelet-pre-start.sh[6019]: vm.max_map_count = 262144
Feb 22 13:03:29 host13 kubelet-pre-start.sh[6019]: * Applying /etc/sysctl.d/k8s.conf ...
Feb 22 13:03:29 host13 kubelet-pre-start.sh[6019]: net.bridge.bridge-nf-call-ip6tables = 1
Feb 22 13:03:29 host13 kubelet-pre-start.sh[6019]: net.bridge.bridge-nf-call-iptables = 1
Feb 22 13:03:29 host13 kubelet-pre-start.sh[6019]: * Applying /etc/sysctl.conf ...
Feb 22 13:03:29 host13 kubelet-pre-start.sh[6019]: vm.max_map_count = 655360
Feb 22 13:03:29 host13 kubelet-pre-start.sh[6019]: net.ipv4.ip_forward = 1
Feb 22 13:03:29 host13 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Feb 22 13:03:29 host13 kubelet[6041]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 22 13:03:29 host13 kubelet[6041]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 22 13:03:29 host13 kubelet[6041]: I0222 13:03:29.893320 6041 server.go:417] Version: v1.18.13-rc.0
Feb 22 13:03:29 host13 kubelet[6041]: I0222 13:03:29.893656 6041 plugins.go:100] No cloud provider specified.
Feb 22 13:03:29 host13 kubelet[6041]: I0222 13:03:29.893680 6041 server.go:838] Client rotation is on, will bootstrap in background
Feb 22 13:03:29 host13 kubelet[6041]: I0222 13:03:29.896813 6041 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 22 13:03:29 host13 kubelet[6041]: I0222 13:03:29.898337 6041 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.205044 6041 server.go:647] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.206560 6041 container_manager_linux.go:266] container manager verified user specified cgroup-root exists: []
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.206583 6041 container_manager_linux.go:271] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.206762 6041 topology_manager.go:126] [topologymanager] Creating topology manager with none policy
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.206777 6041 container_manager_linux.go:301] [topologymanager] Initializing Topology Manager with none policy
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.206785 6041 container_manager_linux.go:306] Creating device plugin manager: true
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.206893 6041 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.206915 6041 client.go:92] Start docker client with request timeout=2m0s
Feb 22 13:03:30 host13 kubelet[6041]: W0222 13:03:30.242454 6041 docker_service.go:562] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.242523 6041 docker_service.go:238] Hairpin mode set to "hairpin-veth"
Feb 22 13:03:30 host13 kubelet[6041]: W0222 13:03:30.242700 6041 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Feb 22 13:03:30 host13 kubelet[6041]: W0222 13:03:30.248855 6041 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.248955 6041 docker_service.go:253] Docker cri networking managed by cni
Feb 22 13:03:30 host13 kubelet[6041]: W0222 13:03:30.249072 6041 cni.go:237] Unable to update cni config: no networks found in /etc/cni/net.d
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.285523 6041 docker_service.go:258] Docker Info: &{ID:NIOZ:R54X:RCP3:KXAV:LB7P:WZ3M:4VWD:GZYC:3POA:V3B3:7VSS:TESZ Containers:8 ContainersRunning:8 ContainersPaused:0 ContainersStopped:0 Images:12 Driver:devicemapper DriverStatus:[[Pool Name docker-253:0-2684678352-pool] [Pool Blocksize 65.54kB] [Base Device Size 10.74GB] [Backing Filesystem xfs] [Udev Sync Supported true] [Data file /dev/loop0] [Metadata file /dev/loop1] [Data loop file /var/lib/docker/devicemapper/devicemapper/data] [Metadata loop file /var/lib/docker/devicemapper/devicemapper/metadata] [Data Space Used 1.668GB] [Data Space Total 107.4GB] [Data Space Available 105.7GB] [Metadata Space Used 2.408MB] [Metadata Space Total 2.147GB] [Metadata Space Available 2.145GB] [Thin Pool Minimum Free Space 10.74GB] [Deferred Removal Enabled true] [Deferred Deletion Enabled true] [Deferred Deleted Device Count 0] [Library Version 1.02.107-RHEL7 (2015-10-14)]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:false IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:65 OomKillDisable:true NGoroutines:68 SystemTime:2021-02-22T13:03:30.250443356+08:00 LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:3.10.0-327.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc000392380 NCPU:8 MemTotal:33565855744 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:host13 Labels:[] ExperimentalBuild:false ServerVersion:19.03.2 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:894b81a4b802e4eb2a91d1ce216b8817763c29fb Expected:894b81a4b802e4eb2a91d1ce216b8817763c29fb} RuncCommit:{ID:425e105d5a03fabd737a126ad93d62a9eeede87f Expected:425e105d5a03fabd737a126ad93d62a9eeede87f} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: the devicemapper storage-driver is deprecated, and will be removed in a future release. WARNING: devicemapper: usage of loopback devices is strongly discouraged for production use. Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.]}
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.285848 6041 docker_service.go:271] Setting cgroupDriver to cgroupfs
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.322944 6041 remote_runtime.go:59] parsed scheme: ""
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.322986 6041 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.323061 6041 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock <nil> 0 <nil>}] <nil> <nil>}
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.323088 6041 clientconn.go:933] ClientConn switching balancer to "pick_first"
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.323193 6041 remote_image.go:50] parsed scheme: ""
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.323207 6041 remote_image.go:50] scheme "" not registered, fallback to default scheme
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.323232 6041 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock <nil> 0 <nil>}] <nil> <nil>}
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.323246 6041 clientconn.go:933] ClientConn switching balancer to "pick_first"
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.323323 6041 kubelet.go:292] Adding pod path: /etc/kubernetes/manifests
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.323371 6041 kubelet.go:317] Watching apiserver
Feb 22 13:03:30 host13 kubelet[6041]: E0222 13:03:30.685903 6041 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
Feb 22 13:03:30 host13 kubelet[6041]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.720381 6041 kuberuntime_manager.go:217] Container runtime docker initialized, version: 19.03.2, apiVersion: 1.40.0
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.721562 6041 server.go:1126] Started kubelet
Feb 22 13:03:30 host13 kubelet[6041]: E0222 13:03:30.721862 6041 kubelet.go:1308] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.722387 6041 server.go:145] Starting to listen on 0.0.0.0:10250
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.723176 6041 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.723530 6041 volume_manager.go:265] Starting Kubelet Volume Manager
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.723703 6041 desired_state_of_world_populator.go:139] Desired state populator starts to run
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.723831 6041 server.go:393] Adding debug handlers to kubelet server.
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.752264 6041 status_manager.go:158] Starting to sync pod status with apiserver
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.752326 6041 kubelet.go:1824] Starting kubelet main sync loop.
Feb 22 13:03:30 host13 kubelet[6041]: E0222 13:03:30.752390 6041 kubelet.go:1848] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
Feb 22 13:03:30 host13 kubelet[6041]: E0222 13:03:30.768691 6041 kubelet.go:2190] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.768912 6041 clientconn.go:106] parsed scheme: "unix"
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.768929 6041 clientconn.go:106] scheme "unix" not registered, fallback to default scheme
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.769101 6041 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.769123 6041 clientconn.go:933] ClientConn switching balancer to "pick_first"
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.823643 6041 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.823861 6041 kuberuntime_manager.go:984] updating runtime config through cri with podcidr 100.64.0.0/24
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.824482 6041 docker_service.go:354] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:100.64.0.0/24,},}
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.824785 6041 kubelet_network.go:77] Setting Pod CIDR: -> 100.64.0.0/24
Feb 22 13:03:30 host13 kubelet[6041]: E0222 13:03:30.841266 6041 factory.go:340] devicemapper filesystem stats will not be reported: unable to find thin_ls binary
Feb 22 13:03:30 host13 kubelet[6041]: E0222 13:03:30.852701 6041 kubelet.go:1848] skipping pod synchronization - container runtime status check may not have completed yet
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.935460 6041 kubelet_node_status.go:70] Attempting to register node host13
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.945438 6041 kubelet_node_status.go:112] Node host13 was previously registered
Feb 22 13:03:30 host13 kubelet[6041]: I0222 13:03:30.946146 6041 kubelet_node_status.go:73] Successfully registered node host13
Feb 22 13:03:31 host13 kubelet[6041]: I0222 13:03:31.025496 6041 cpu_manager.go:184] [cpumanager] starting with none policy
Feb 22 13:03:31 host13 kubelet[6041]: I0222 13:03:31.025526 6041 cpu_manager.go:185] [cpumanager] reconciling every 10s
Feb 22 13:03:31 host13 kubelet[6041]: I0222 13:03:31.025558 6041 state_mem.go:36] [cpumanager] initializing new in-memory state store
Feb 22 13:03:31 host13 kubelet[6041]: I0222 13:03:31.025852 6041 state_mem.go:88] [cpumanager] updated default cpuset: ""
Feb 22 13:03:31 host13 kubelet[6041]: I0222 13:03:31.025864 6041 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
Feb 22 13:03:31 host13 kubelet[6041]: I0222 13:03:31.025882 6041 policy_none.go:43] [cpumanager] none policy: Start
Feb 22 13:03:31 host13 kubelet[6041]: F0222 13:03:31.026735 6041 kubelet.go:1386] Failed to start ContainerManager failed to initialize top level QOS containers: failed to update top level Burstable QOS cgroup : failed to set supported cgroup subsystems for cgroup [kubepods burstable]: failed to find subsystem mount for required subsystem: pids
Feb 22 13:03:31 host13 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Feb 22 13:03:31 host13 systemd[1]: Unit kubelet.service entered failed state.
Feb 22 13:03:31 host13 systemd[1]: kubelet.service failed.
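With this much output it is easy to miss the actual failure. A quick way to surface only the fatal (F-prefixed) klog entries, assuming the kubelet logs are in journald (otherwise grep /var/log/messages the same way):

journalctl -u kubelet --no-pager | grep -E ' F[0-9]{4} '

Here that immediately isolates the line above: Failed to start ContainerManager ... failed to find subsystem mount for required subsystem: pids.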

The sealos bundle in use was kube1.18.13-rc.0.tar.gz. The decisive entry is the final fatal line: kubelet could not find a mount for the required pids cgroup subsystem, so the ContainerManager failed to start and systemd kept restarting the service. The problem is caused by the Linux release: upgrading to a release newer than CentOS 7.6 avoids it, but since the production environment could not be upgraded, the workaround below was used instead.

[root@app13 resource-catalog]# lsb_release -a
LSB Version: :core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch
Distributor ID: CentOS
Description: CentOS Linux release 7.2.1511 (Core)
Release: 7.2.1511
Codename: Core
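The OS release and the log line up: kubelet v1.18's pod/node PID-limit support needs the pids cgroup controller, which the old 3.10.0-327 kernel shipped with CentOS 7.2 does not expose. A quick check (not from the original post, but useful to confirm the diagnosis on the affected node):

cat /proc/cgroups   # no "pids" row, or enabled=0, means the kernel lacks the controller
ls /sys/fs/cgroup   # on a healthy host, "pids" appears among the mounted controllers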
Solution
Modify /etc/systemd/system/kubelet.service.d/10-kubeadm.conf so it reads as follows. The change adds feature gates that turn off the pod and node PID limits, which are the features that require the pids cgroup controller:

# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --feature-gates=SupportPodPidsLimit=false,SupportNodePidsLimit=false"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
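For the drop-in to take effect, systemd must re-read the unit files and kubelet must be restarted; the status check at the end is just a suggested verification step:

systemctl daemon-reload
systemctl restart kubelet
systemctl status kubelet   # should now stay active (running) instead of crash-looping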
