Installing Kubernetes v1.18 on CentOS 7

System preparation

Check the OS version

[root@test-1]# cat /etc/centos-release
CentOS Linux release 7.8.2003 (Core)

Configure the network in /etc/sysconfig/network-scripts/ifcfg-ens33

TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
NAME=ens33
UUID=63317378-0754-43ca-a0bd-9f3540024853
DEVICE=ens33
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.23.200
NETMASK=255.255.255.0
GATEWAY=192.168.23.2
DNS1=114.114.114.114
ZONE=public
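
After editing the file, restart the network service so the static IP takes effect (a minimal check, assuming the legacy network service that ships with CentOS 7):

systemctl restart network
ip addr show ens33    # confirm 192.168.23.200 is assigned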

Add the Aliyun yum repository

rm -rfv /etc/yum.repos.d/*
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
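
Then rebuild the yum cache against the new repository (a routine step after replacing the repo files):

yum clean all
yum makecache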

Configure host names in /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.23.200 test-1
192.168.23.201 test-2
192.168.23.202 test-3
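
Each machine's hostname should match its entry in /etc/hosts; if it does not, set it with hostnamectl (run the matching command on each node):

hostnamectl set-hostname test-1    # on 192.168.23.200; use test-2 / test-3 on the workers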

Disable swap

swapoff -a

Comment out the swap partition in /etc/fstab

/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=8d8fc854-3a82-410e-9984-2f6d30336959 /boot                   xfs     defaults        0 0
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
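
Instead of editing /etc/fstab by hand, the swap entry can be commented out with a sed one-liner; either way, verify the result with free (a sketch, assuming the entry contains the word "swap"):

sed -ri 's/.*swap.*/#&/' /etc/fstab
free -h    # the Swap row should read 0B after swapoff -a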

Configure kernel parameters so bridged IPv4 traffic is passed to the iptables chains

[root@test-1 ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
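
If sysctl --system reports that the net.bridge.* keys do not exist, the br_netfilter kernel module has not been loaded yet; load it and make it persistent across reboots:

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf    # load at boot
sysctl --system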

Install common packages

yum install vim bash-completion net-tools gcc -y

Install docker-ce from the Aliyun mirror

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum -y install docker-ce

Installing docker-ce may fail with an error like the one below (this sample output was captured on a CentOS 8 host, as the repo names show, but the containerd.io dependency problem is the same):

[root@master01 ~]# yum -y install docker-ce
CentOS-8 - Base - mirrors.aliyun.com                                                                               14 kB/s | 3.8 kB     00:00
CentOS-8 - Extras - mirrors.aliyun.com                                                                            6.4 kB/s | 1.5 kB     00:00
CentOS-8 - AppStream - mirrors.aliyun.com                                                                          16 kB/s | 4.3 kB     00:00
Docker CE Stable - x86_64                                                                                          40 kB/s |  22 kB     00:00
Error:
 Problem: package docker-ce-3:19.03.8-3.el7.x86_64 requires containerd.io >= 1.2.2-3, but none of the providers can be installed
  - cannot install the best candidate for the job
  - package containerd.io-1.2.10-3.2.el7.x86_64 is excluded
  - package containerd.io-1.2.13-3.1.el7.x86_64 is excluded
  - package containerd.io-1.2.2-3.3.el7.x86_64 is excluded
  - package containerd.io-1.2.2-3.el7.x86_64 is excluded
  - package containerd.io-1.2.4-3.1.el7.x86_64 is excluded
  - package containerd.io-1.2.5-3.1.el7.x86_64 is excluded
  - package containerd.io-1.2.6-3.3.el7.x86_64 is excluded
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)

Workaround

wget https://download.docker.com/linux/centos/7/x86_64/edge/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm
yum install containerd.io-1.2.6-3.3.el7.x86_64.rpm

Then rerun yum -y install docker-ce and it will succeed.

Add the Aliyun Docker registry mirror

mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://fl791z1h.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
systemctl enable docker
systemctl restart docker
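
To confirm the mirror is active, check docker info:

docker info | grep -A 1 'Registry Mirrors'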

Install kubectl, kubelet, and kubeadm

Add the Aliyun Kubernetes repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install the packages

yum -y install kubectl kubelet kubeadm
systemctl enable kubelet
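
With no version specified, yum installs the newest 1.18.x in the repo (v1.18.2 when the outputs below were captured). To pin matching versions explicitly instead, name them, for example:

yum -y install kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0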

Initialize the Kubernetes cluster

Run on the master

kubeadm init --kubernetes-version=1.18.0  \
--apiserver-advertise-address=192.168.23.200   \
--image-repository registry.aliyuncs.com/google_containers  \
--service-cidr=10.10.0.0/16 --pod-network-cidr=10.122.0.0/16

The pod network CIDR is 10.122.0.0/16, and the API server advertise address is the master's own IP. This step is critical: by default kubeadm pulls the required images from k8s.gcr.io, which is unreachable from mainland China, so --image-repository is used to point the pulls at the Aliyun registry instead.
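
Optionally, pull the images before running init so that download problems surface separately from initialization problems (a sketch using the same mirror and version):

kubeadm config images pull \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.18.0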

After the cluster initializes successfully, it returns output like the following:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.23.200:6443 --token lvr1m8.agdwfvygxmb3sbw0 \
    --discovery-token-ca-cert-hash sha256:3a51ad14064fe7661e5ff45eba2222b29c3a541c025cfa911678981ac6bd1dd2

Save the last part of this output; it is the command the other nodes will run to join the Kubernetes cluster.
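
If the join command is lost, or its token expires (the default TTL is 24 hours), a fresh one can be printed on the master at any time:

kubeadm token create --print-join-command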

Set up kubectl as the output instructs

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run the following command to enable kubectl shell auto-completion

source <(kubectl completion bash)
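
That only affects the current shell. To make completion permanent, append it to ~/.bashrc (this relies on the bash-completion package installed earlier):

echo 'source <(kubectl completion bash)' >> ~/.bashrc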

Check the nodes and pods

[root@test-1 ~]# kubectl get node
NAME     STATUS     ROLES    AGE     VERSION
test-1   NotReady   master   2m29s   v1.18.2
[root@test-1 ~]# kubectl get pod --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-7ff77c879f-fsj9l         0/1     Pending   0          2m12s
kube-system   coredns-7ff77c879f-q5ll2         0/1     Pending   0          2m12s
kube-system   etcd-test-1                      1/1     Running   0          2m22s
kube-system   kube-apiserver-test-1            1/1     Running   0          2m22s
kube-system   kube-controller-manager-test-1   1/1     Running   0          2m22s
kube-system   kube-proxy-th472                 1/1     Running   0          2m12s
kube-system   kube-scheduler-test-1            1/1     Running   0          2m22s
[root@test-1 ~]#

The node shows NotReady because the coredns pods cannot start yet: no CNI network plugin has been installed.

# Copy $HOME/.kube/config from the master to the worker nodes;
# otherwise kubectl on them fails with: error: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
ssh test-2 'mkdir ~/.kube'
ssh test-3 'mkdir ~/.kube'

scp $HOME/.kube/config root@test-2:~/.kube
scp $HOME/.kube/config root@test-3:~/.kube

Install the Calico network plugin

Run on the master (applying the manifest once is enough; the calico-node DaemonSet it creates runs a pod on every node)

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
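
The Calico images can take a few minutes to pull; watch the kube-system pods until calico-node and coredns reach Running:

kubectl get pods -n kube-system -w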

Configure the worker nodes

Run the join command recorded earlier on each worker node (test-2 and test-3):

kubeadm join 192.168.23.200:6443 --token lvr1m8.agdwfvygxmb3sbw0 \
    --discovery-token-ca-cert-hash sha256:3a51ad14064fe7661e5ff45eba2222b29c3a541c025cfa911678981ac6bd1dd2

Check the cluster status

[root@test-1 ~]#kubectl get node
NAME     STATUS   ROLES    AGE    VERSION
test-1   Ready    master   3h7m   v1.18.2
test-2   Ready    <none>   3h     v1.18.2
test-3   Ready    <none>   3h     v1.18.2

Install kubernetes-dashboard

The official dashboard manifest does not expose the Service as a NodePort, so download the YAML file locally and add a NodePort to the Service definition.

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

Modify recommended.yaml

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000 # added
  selector:
    k8s-app: kubernetes-dashboard

Create the resources

kubectl create -f recommended.yaml

Verify the installation

[root@test-1 ~]#kubectl get pods -A  -o wide
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE    IP               NODE     NOMINATED NODE   READINESS GATES
kube-system            calico-kube-controllers-789f6df884-b7td2     1/1     Running   0          165m   10.122.139.194   test-3   <none>           <none>
kube-system            calico-node-8c2b2                            1/1     Running   0          165m   192.168.23.202   test-3   <none>           <none>
kube-system            calico-node-g8cmj                            1/1     Running   0          165m   192.168.23.201   test-2   <none>           <none>
kube-system            calico-node-t7ghm                            1/1     Running   0          165m   192.168.23.200   test-1   <none>           <none>
kube-system            coredns-7ff77c879f-8hfrq                     1/1     Running   0          173m   10.122.17.1      test-2   <none>           <none>
kube-system            coredns-7ff77c879f-jzgx8                     1/1     Running   0          173m   10.122.139.193   test-3   <none>           <none>
kube-system            etcd-test-1                                  1/1     Running   0          173m   192.168.23.200   test-1   <none>           <none>
kube-system            kube-apiserver-test-1                        1/1     Running   0          173m   192.168.23.200   test-1   <none>           <none>
kube-system            kube-controller-manager-test-1               1/1     Running   0          173m   192.168.23.200   test-1   <none>           <none>
kube-system            kube-proxy-5j9td                             1/1     Running   0          167m   192.168.23.202   test-3   <none>           <none>
kube-system            kube-proxy-hqfzq                             1/1     Running   0          173m   192.168.23.200   test-1   <none>           <none>
kube-system            kube-proxy-w6wzt                             1/1     Running   0          167m   192.168.23.201   test-2   <none>           <none>
kube-system            kube-scheduler-test-1                        1/1     Running   0          173m   192.168.23.200   test-1   <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-6b4884c9d5-q2m4p   1/1     Running   0          23m    10.122.139.195   test-3   <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-7b544877d5-rdz4p        1/1     Running   0          23m    10.122.17.2      test-2   <none>           <none>

[root@test-1 ~]#kubectl get service -n kubernetes-dashboard  -o wide
NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE   SELECTOR
dashboard-metrics-scraper   ClusterIP   10.10.47.66    <none>        8000/TCP        23m   k8s-app=dashboard-metrics-scraper
kubernetes-dashboard        NodePort    10.10.57.249   <none>        443:30000/TCP   23m   k8s-app=kubernetes-dashboard

Create a dashboard admin user

Create a dashboard-admin.yaml file:

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: dashboard-admin
  namespace: kubernetes-dashboard

kubectl create -f ./dashboard-admin.yaml

Grant the user permissions

Create a dashboard-admin-bind-cluster-role.yaml file:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin-bind-cluster-role
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard

kubectl create -f ./dashboard-admin-bind-cluster-role.yaml
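
Equivalently, the ServiceAccount and the binding above can each be created in one step with imperative kubectl commands, skipping the YAML files:

kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
kubectl create clusterrolebinding dashboard-admin-bind-cluster-role \
  --clusterrole=cluster-admin \
  --serviceaccount=kubernetes-dashboard:dashboard-admin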

Retrieve and copy the user's token

Run the following on the command line.

kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')

The output includes the token value; in this example it is:

eyJhbGciOiJSUzI1NiIsImtpZCI6Im9CdXlmODBpbUtGLUJLd3k1OEczcXVObjRyendoR2xHcHBJdDJDdm1jdzQifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tZGd4dG4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiOWI4ZmNjNTYtMWE2Yi00YzQyLWE2MWMtOWNiOTVkYjU5ZDIxIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.BMFDfjacZtdp8R_htqade_hQMArdOxz_WFga-0GjV0LCa4LQOdOCPLQ--P0bEW1Wz1dAsvYhu2tLuqmoU3792d5RJOtLWSxkO5KGMSy2w-QfM-fikKtRzUrP7ttYqwNr_uNUaL3ONe9xwtTPIKaunC8ub5QwYuxqxUSClZ9UVkWfR-fZ-xkE40iFWeFkCaQcByJAQ3087z9dqg8Ps7QryDnCyeMgbVhPr3U5eL3ZoYHbfT9t74YqdyRSo8q0n-RxOxFHA_wmx11_DTzTP4o0tAb8wNgNaTCM-G3JqoG-7gXfK9DBX8ApqLlb-f1iIYmvy8o1ehFldBaJZFxb_ixV_w

Open the dashboard UI

Open https://192.168.23.200:30000 in a browser (the dashboard serves a self-signed certificate, so the browser will show a warning you must accept).

Choose the Token sign-in option and enter the token retrieved on the command line.

After clicking Sign in, you are taken to the dashboard.

At this point, the Kubernetes v1.18 installation is complete.