Kubernetes taints: controlling whether a pod can run on a given node
Letting the master node run workloads (run these commands on the master node only)
In a cluster initialized with kubeadm, pods are not scheduled onto the master node for security reasons; in other words, the master node does not carry workloads.
Since the cluster built here is a test environment, you can make the master node take workloads with the command below.
Here, k8s is the hostname of the master node.
To allow pods to be scheduled on the master node, run:
kubectl taint nodes --all node-role.kubernetes.io/master-
The output looks like:
node "k8s" untainted
If it prints error: taint "node-role.kubernetes.io/master:" not found, the error can be ignored (it just means that node had no master taint to remove).
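To confirm the taint is really gone, inspect the node's Taints field directly. A quick check, assuming (as above) that k8s is the master's hostname:

```shell
# Show the Taints field of the node; it reads "Taints: <none>" once removed
kubectl describe node k8s | grep -i taints

# Or list the taints of every node via jsonpath
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.taints}{"\n"}{end}'
```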
To forbid scheduling pods on the master again:
kubectl taint nodes k8s node-role.kubernetes.io/master=true:NoSchedule
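Instead of removing the taint cluster-wide, a single pod can opt in to running on the tainted master by declaring a matching toleration. A minimal sketch of such a manifest (the pod name and image are placeholders, not from the original post):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-on-master      # placeholder name
spec:
  containers:
  - name: demo
    image: nginx            # placeholder image
  tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"      # tolerate the taint regardless of its value
    effect: "NoSchedule"
```

This keeps the master protected from all other workloads while still letting this one pod schedule there.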
####################################
Today, after creating a cluster, a pod stayed in the Pending state. kubectl describe pod showed:
3 node(s) had taints that the pod didn't tolerate.
In other words, the nodes carry taints that the pod does not tolerate. Running kubectl get no -o yaml | grep taint -A 5 showed that the node was unschedulable. This is because, for security reasons, Kubernetes by default does not schedule pods onto the master node, so the fix was:
kubectl taint nodes --all node-role.kubernetes.io/master-
####################################
root@n8:~# kubectl get pods -n kube-system | grep tiller
tiller-deploy-65ff9d5d97-2xcmk 0/1 Pending 0 6m2s
root@nav8:~# kubectl describe pod tiller-deploy-65ff9d5d97-2xcmk -n kube-system
Name: tiller-deploy-65ff9d5d97-2xcmk
Namespace: kube-system
Priority: 0
Node: <none>
Labels: app=helm
name=tiller
pod-template-hash=65ff9d5d97
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/tiller-deploy-65ff9d5d97
Containers:
tiller:
Image: gcr.io/kubernetes-helm/tiller:v2.14.1
Ports: 44134/TCP, 44135/TCP
Host Ports: 0/TCP, 0/TCP
Liveness: http-get http://:44135/liveness delay=1s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:44135/readiness delay=1s timeout=1s period=10s #success=1 #failure=3
Environment:
TILLER_NAMESPACE: kube-system
TILLER_HISTORY_MAX: 0
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-jckn7 (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
default-token-jckn7:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-jckn7
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 22s (x5 over 6m22s) default-scheduler 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
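Here the cluster has only one node (the tainted master), so tiller has nowhere to go; removing the taint with the command above fixes it. An alternative that leaves the taint in place is to add a toleration to the tiller deployment itself. A sketch using a strategic merge patch (the toleration fields are standard, but verify the deployment name in your cluster):

```shell
kubectl -n kube-system patch deployment tiller-deploy --patch '
spec:
  template:
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule'
```

After the patch, the ReplicaSet rolls out a new pod that tolerates the master taint and should leave Pending.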