Deploying Kubernetes 1.18 on CentOS 8

Kubernetes in brief: Kubernetes is Google's open-source container cluster management system. Built on top of Docker, it provides a container scheduling service with resource scheduling, load balancing and failover, service registration, and dynamic scaling. This article walks through building a Kubernetes platform on CentOS 8.1. Before the installation itself, here are the core Kubernetes concepts and what each one is responsible for.
  • Kubernetes Master: the Master node is the gateway and hub of the cluster. It maintains cluster health, schedules work, orchestrates pods, and communicates with the other nodes. It consists of the API Server, Controller Manager and Scheduler, plus etcd, which stores the cluster state (the cluster's data lives in etcd and is served to the other components and to clients through the API Server).
  • Kubernetes worker (Slave) node: a worker node receives instructions from the Master and carries them out, for example creating pods, destroying pods, or adjusting traffic rules.
  • Kubernetes Node components: the core of each node is the kubelet, which registers the node with the API Server and periodically reports the node's resource usage back to the Master.
  • kube-proxy: every worker node runs a kube-proxy daemon. For each Service object it generates iptables or IPVS rules as needed, capturing traffic sent to the Service's Cluster IP and forwarding it to the correct backend pod.
  • Pod: the smallest schedulable unit in Kubernetes is not a bare container but a Pod, the smallest unit that can be created, destroyed, scheduled, and managed; a Pod wraps one container or a group of tightly coupled containers.
  • Service: a Service is an abstraction built on top of Pods. A label selector picks out a group of pods and gives them a fixed access point (usually a virtual IP, also called the Service IP). If the cluster runs a DNS add-on, the Service also gets a DNS name for service discovery. Requests arriving at the Service IP are load-balanced across the backend pods, and a Service can also bring traffic from outside the cluster in, so in essence a Service is a layer-4 proxy (see the sketch after this list).
  • Replication Controller: one of the most useful pieces of Kubernetes, it keeps a specified number of Pod replicas running. An application usually needs several pods, and if the host a replica runs on fails, the Replication Controller starts an equivalent pod on another host. Replicas can be created from a template or adopted from existing pods matched by a label selector. It has since been superseded by the ReplicaSet.
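To make the Service concept concrete, here is a minimal sketch of a Service manifest. The name my-app and the port numbers are made-up values for illustration only, not part of this deployment: every pod carrying the label app: my-app becomes a backend, and the Service gives them one stable virtual IP and DNS name inside the cluster.

apiVersion: v1
kind: Service
metadata:
  name: my-app                # hypothetical name, for illustration only
spec:
  selector:
    app: my-app               # any pod with this label becomes a backend of the Service
  ports:
    - port: 80                # the port exposed on the Service (cluster) IP
      targetPort: 8080        # the container port traffic is forwarded to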
With the concepts out of the way, let's start the installation. There are a lot of components involved, and pulling Google's base images from behind the firewall makes it harder, so it is a bit of a slog; it took me a whole afternoon at home to get a complete working setup. The dashboard installation is the most fiddly part, but after stepping into a few pitfalls you will have it down.

1. System Preparation

Check the OS version
[root@master ~]# cat /etc/redhat-release
CentOS Linux release 8.1.1911 (Core)
[root@master ~]#
Disable the firewall
[root@master ~]# systemctl stop firewalld
[root@master ~]# systemctl disable firewalld
Add a local hosts entry
[root@master yum.repos.d]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
149.28.167.215 master
[root@master yum.repos.d]#
Disable swap
[root@master /]# swapoff -a
Configure kernel parameters so that bridged IPv4 traffic is passed to the iptables chains
[root@master /]# cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@master /]#
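Two follow-up steps are easy to miss here (they are not in the output above, but are standard practice): load the br_netfilter module and apply the new sysctl file so the settings take effect immediately, and comment the swap entry out of /etc/fstab so that swapoff -a survives a reboot. A minimal sketch:
[root@master /]# modprobe br_netfilter                     # make sure the bridge netfilter module is loaded
[root@master /]# sysctl --system                           # apply /etc/sysctl.d/k8s.conf without rebooting
[root@master /]# sed -i '/ swap / s/^/#/' /etc/fstab       # keep swap disabled after the next reboot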

2. Add the Aliyun Repositories

[root@master ~]# cd /etc/yum.repos.d/
Add the Aliyun CentOS repository
[root@master ~]# wget http://mirrors.aliyun.com/repo/Centos-8.repo
Add the Docker repository
[root@master ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Configure the Aliyun registry mirror (accelerator)
[root@master /]# cat /etc/docker/daemon.json
{
"registry-mirrors": ["https://fl791z1h.mirror.aliyuncs.com"]
}
[root@master /]#
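Note that Docker itself is only installed in step 3, so /etc/docker may not exist yet; if so, create it first, and restart the daemon once Docker is installed so the mirror actually takes effect. Roughly:
[root@master /]# mkdir -p /etc/docker                        # create the config directory if it is missing
[root@master /]# systemctl restart docker                    # run this after Docker is installed in step 3
[root@master /]# docker info | grep -A1 'Registry Mirrors'   # confirm the Aliyun mirror is in use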
Add the Kubernetes repository
[root@master yum.repos.d]# cat kubernetes.repo
[kubernetes]
name=kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
[root@master yum.repos.d]#
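After adding the repositories it does no harm to rebuild the yum cache, which also surfaces repo typos early:
[root@master yum.repos.d]# yum clean all
[root@master yum.repos.d]# yum makecache
[root@master yum.repos.d]# yum repolist | grep -Ei 'docker|kubernetes'   # both repos should be listed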

3. Install Docker and the Kubernetes Packages

Install commonly used packages
[root@master /]# yum -y install vim bash-completion net-tools gcc
Install docker-ce dependencies
[root@master /]# yum -y install yum-utils device-mapper-persistent-data
Install docker-ce
[root@master /]# wget https://download.docker.com/linux/centos/7/x86_64/edge/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm
[root@master /]# yum -y install containerd.io-1.2.6-3.3.el7.x86_64.rpm
[root@master /]# yum -y install docker-ce
[root@master /]# systemctl enable docker.service
Install kubectl, kubelet, and kubeadm
[root@master /]# yum -y install kubectl kubelet kubeadm
[root@master /]# systemctl enable kubelet
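Before moving on to kubeadm init it is worth starting Docker and double-checking the installed versions (the init warnings below show exactly what happens when docker and kubelet are not enabled or running yet):
[root@master /]# systemctl start docker       # kubeadm init needs a running container runtime
[root@master /]# docker --version
[root@master /]# kubeadm version -o short
[root@master /]# kubelet --version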

4. Initialize the Kubernetes Cluster

The pod network CIDR is 10.122.0.0/16, and the API server advertise address is the master's own IP.
[root@master ~]# kubeadm init --kubernetes-version=1.18.0 --apiserver-advertise-address=149.28.167.215 --image-repository registry.aliyuncs.com/google_containers --service-cidr 10.10.0.0/16 --pod-network-cidr 10.122.0.0/16
Init output:
W0531 05:44:30.640040 28006 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING FileExisting-tc]: tc not found in system path
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.10.0.1 149.28.167.215]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [149.28.167.215 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [149.28.167.215 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0531 05:47:40.867693 28006 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0531 05:47:40.869240 28006 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests".This can take up to 4m0s
[apiclient] All control plane components are healthy after 22.004337 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: si2pfc.70qmie00uuxuqgpw
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 149.28.167.215:6443 --token si2pfc.70qmie00uuxuqgpw \
--discovery-token-ca-cert-hash sha256:2c4d66e9eff20ee3f5f0cf9f4e67630f2545ec9b49c040e9a33f5c3bd023774b
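The bootstrap token in the join command above expires after 24 hours by default; if a worker node is added later, a fresh join command can be generated on the master with:
[root@master /]# kubeadm token create --print-join-command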
Configure kubectl access for the current user
[root@master /]# mkdir -p $HOME/.kube
[root@master /]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master /]# chown root:root $HOME/.kube/config
[root@master /]# source <(kubectl completion bash)
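The completion only applies to the current shell; to make it permanent it can be appended to the shell profile, for example:
[root@master /]# echo 'source <(kubectl completion bash)' >> ~/.bashrc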
Check the node status
[root@master /]# kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready master 7h5m v1.18.3
[root@master /]#
Right after init the node shows NotReady because the coredns pods have not started; a network plugin is still missing (the listings here were captured after the full setup, which is why the node already shows Ready and the dashboard pods appear).
List all pods
[root@master /]# kubectl get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-7ff77c879f-bsnpp 1/1 Pending 1 7h5m
kube-system coredns-7ff77c879f-gj8kw 1/1 Pending 1 7h5m
kube-system etcd-master 1/1 Running 1 7h5m
kube-system kube-apiserver-master 1/1 Running 1 7h5m
kube-system kube-controller-manager-master 1/1 Running 1 7h5m
kube-system kube-proxy-fhfn5 1/1 Running 1 7h5m
kube-system kube-scheduler-master 1/1 Running 1 7h5m
kubernetes-dashboard dashboard-metrics-scraper-dc6947fbf-kcjc7 1/1 Running 0 5h54m
kubernetes-dashboard kubernetes-dashboard-5d4dc8b976-x7k6s 1/1 Running 0 5h54m
[root@master /]#

5. Install the Network Plugin

[root@master /]# kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
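Calico and coredns take a little while to pull images and start; their progress can be watched until everything is Running, for example:
[root@master /]# kubectl get pods -n kube-system -w        # Ctrl-C once the calico-* and coredns-* pods are Running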
The cluster is now healthy:
[root@master /]# kubectl get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-76d4774d89-vzd4g 1/1 Running 1 6h42m
kube-system calico-node-9dm6v 1/1 Running 1 6h42m
kube-system coredns-7ff77c879f-bsnpp 1/1 Running 1 7h5m
kube-system coredns-7ff77c879f-gj8kw 1/1 Running 1 7h5m
kube-system etcd-master 1/1 Running 1 7h5m
kube-system kube-apiserver-master 1/1 Running 1 7h5m
kube-system kube-controller-manager-master 1/1 Running 1 7h5m
kube-system kube-proxy-fhfn5 1/1 Running 1 7h5m
kube-system kube-scheduler-master 1/1 Running 1 7h5m
kubernetes-dashboard dashboard-metrics-scraper-dc6947fbf-kcjc7 1/1 Running 0 5h54m
kubernetes-dashboard kubernetes-dashboard-5d4dc8b976-x7k6s 1/1 Running 0 5h54m
[root@master /]#

6. Create the kubernetes-dashboard Certificates

[root@master ~]# mkdir dashboard-certs
[root@master ~]# kubectl create namespace kubernetes-dashboard
[root@master ~]# openssl genrsa -out dashboard.key 2048
[root@master ~]# openssl req -days 36000 -new -out dashboard.csr -key dashboard.key -subj '/CN=dashboard-cert'
[root@master ~]# openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
[root@master ~]# kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard
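One detail worth noting: the openssl commands write dashboard.key/.csr/.crt into the current directory, so either run them inside the dashboard-certs directory created above or skip the mkdir. The certificate and the resulting secret can be sanity-checked with something like:
[root@master ~]# openssl x509 -in dashboard.crt -noout -subject -dates    # inspect the self-signed certificate
[root@master ~]# kubectl get secret kubernetes-dashboard-certs -n kubernetes-dashboard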

7. Install kubernetes-dashboard

Download the dashboard manifest
[root@master ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc7/aio/deploy/recommended.yaml
[root@master ~]#
Modify the Service section of the file so the dashboard is exposed on a fixed NodePort:
[root@master ~]# cat recommended.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000    # the NodePort the dashboard is exposed on
  selector:
    k8s-app: kubernetes-dashboard
[root@master ~]#
[root@master ~]# kubectl create -f recommended.yaml
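After creating the manifest, the dashboard pods and the NodePort service should appear in the kubernetes-dashboard namespace; a quick check:
[root@master ~]# kubectl get pods,svc -n kubernetes-dashboard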

8. Add an Admin Service Account

Create an admin ServiceAccount and bind it to the cluster-admin role:
[root@master ~]# cat role-demo.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard-admin
    namespace: kube-system
[root@master ~]#
[root@master ~]# kubectl apply -f role-demo.yaml
serviceaccount/kubernetes-dashboard-admin created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-admin created

9. Verify the Installation and Log In

If both of these services are present, the dashboard is up:
[root@master ~]# kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.10.131.194 <none> 8000/TCP 6h9m
kubernetes-dashboard NodePort 10.10.21.147 <none> 443:30000/TCP 5h52m
[root@master ~]#
Access the dashboard in a browser at https://<host-ip>:30000 (it serves HTTPS only, so expect a self-signed-certificate warning).

1.png

10. Log In with a Token

[root@master ~]# kubectl -n kubernetes-dashboard get secret
NAME TYPE DATA AGE
default-token-csz28 kubernetes.io/service-account-token 3 6h13m
kubernetes-dashboard-certs Opaque 0 6h13m
kubernetes-dashboard-csrf Opaque 1 6h13m
kubernetes-dashboard-key-holder Opaque 2 6h13m
kubernetes-dashboard-token-7m2qv kubernetes.io/service-account-token 3 6h13m
[root@master ~]#
Get the token
[root@master ~]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kubernetes-dashboard-admin-token | awk '{print $1}')
Name: kubernetes-dashboard-admin-token-x68x5
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: kubernetes-dashboard-admin
kubernetes.io/service-account.uid: 47e3655a-9577-405d-9016-74cb2ae15f5a
Type: kubernetes.io/service-account-token
Data
====
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IkVPakFTUU5wWHNnd2p5MjlwQi0yRHhUZ3ZxNlQ5QV9aSkRLSy1HZ3RZM0UifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi10b2tlbi14Njh4NSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjQ3ZTM2NTVhLTk1NzctNDA1ZC05MDE2LTc0Y2IyYWUxNWY1YSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiJ9.bZR3CWF8s37UUvywyTv7Q8WAi3BZy04lyN81tg4MJBZcg6r6AN8DlrfEO8B2fltW1XNsulzltedwGwb5Sc198YE9W-k7V9zBcyFMBVtHQRxzhekUC-lHpNpo57ITfyTalvpdvkPDHSOwaV6vAOqiOESD-A9wSwk6xCDvkpbqkEk12PwKnmiS0_j2U7uN_H4_FIRIMz30Gt1WU70PCdG26c-zV4z2aRPd_RR13JPxCGq7Q1_azOOn5IaPXDtHecjWvFQSL9NuBU6jRdI1R0Z5XIFJopze5POQIEIQAJv2ESHMh_0VMTe1spnWFD8N3N4vAGCtxXtOo6UE21BEI2HBLw
ca.crt: 1025 bytes
namespace: 11 bytes
[root@master ~]#
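If only the raw token value is wanted (for scripting, say), jsonpath plus base64 is an equivalent alternative to the describe command above:
[root@master ~]# kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep kubernetes-dashboard-admin-token | awk '{print $1}') -o jsonpath='{.data.token}' | base64 -d; echo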
Use the token to log in to the dashboard

2.png

11. Other Issues

If the dashboard shows no resources (forbidden/permission errors), the service accounts can be bound to cluster-admin:
[root@master ~]# kubectl create clusterrolebinding serviceaccount-cluster-admin --clusterrole=cluster-admin --group=system:serviceaccounts
As shown below:

3.png