Kubernetes Deployment

Notes:

This K8S deployment series is based on Ubuntu. If you want to use Ubuntu as well and stay consistent with this guide, see 《Ubuntu最小化安装》.

==This guide deploys K8S with kubeadm==

Hostname    WAN IP       Spec
master231   10.0.0.231   2 CPU, 4 GB RAM, 50 GB disk
worker232   10.0.0.232   2 CPU, 3 GB RAM, 50 GB disk
worker233   10.0.0.233   2 CPU, 3 GB RAM, 50 GB disk

Preparing to Deploy the K8S Cluster

1. Disable the swap partition

Swap is the system's swap space: when the machine runs low on memory it spills to swap, but swap is much slower than RAM. For performance reasons K8S disallows swap by default, and kubeadm checks during initialization that swap is off; if it is not, initialization fails. If you really want to keep swap, you can pass --ignore-preflight-errors=Swap when installing K8S.

[root@master231:1 ~]# swapoff -a && sysctl -w vm.swappiness=0  # turn off swap for the current boot
[root@master231:1 ~]# sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab # comment out swap in fstab so it stays off after reboot
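The `sed` rule comments out every non-commented line containing `swap`. A quick way to see exactly what it does, on a throwaway copy rather than the real `/etc/fstab` (the sample entries below are made up for illustration):

```shell
# Build a sample fstab and apply the same sed rule to it.
cat > /tmp/fstab.demo <<'EOF'
UUID=abcd-1234 / ext4 defaults 0 1
/swap.img none swap sw 0 0
# /old-swap.img none swap sw 0 0
EOF
sed -ri '/^[^#]*swap/s@^@#@' /tmp/fstab.demo
cat /tmp/fstab.demo
```

The root filesystem line is untouched, the active swap line gets a leading `#`, and the already-commented swap line is not double-commented (the `^[^#]*` part cannot cross an existing `#`).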

2. Verify that swap is off

[root@master231:1 ~]# free -h
total used free shared buff/cache available
Mem: 3.8Gi 855Mi 1.4Gi 2.0Mi 1.5Gi 2.7Gi
Swap: 0B 0B 0B
# Swap shows 0B across the board
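Besides eyeballing `free -h`, you can check programmatically from `/proc/meminfo`, which is handier in scripts (a small sketch; on a properly prepared node `SwapTotal` is 0):

```shell
# Read SwapTotal (in kB) straight from the kernel; 0 means swap is fully off.
swap_kb=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
if [ "${swap_kb:-0}" -eq 0 ]; then
  echo "swap is off"
else
  echo "swap still enabled: ${swap_kb} kB"
fi
```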

3. Ensure the MAC address and product_uuid are unique on every node

[root@master231:1 ~]# ifconfig  eth0  | grep ether | awk '{print $2}'
00:0c:29:97:ce:c7
[root@worker232:1 ~]# ifconfig eth0 | grep ether | awk '{print $2}'
00:0c:29:31:90:93
[root@worker233:0 ~]# ifconfig eth0 | grep ether | awk '{print $2}'
00:0c:29:1a:94:53


[root@master231:1 ~]# cat /sys/class/dmi/id/product_uuid
64ea4d56-6c14-ca1e-54e6-489f5597cec7
[root@worker232:1 ~]# cat /sys/class/dmi/id/product_uuid
a8bc4d56-161c-5efa-1edf-ce270e319093
[root@worker233:0 ~]# cat /sys/class/dmi/id/product_uuid
21fe4d56-5f21-35aa-b83c-9f5b501a9453


# Physical hardware normally has unique addresses, but virtual machines can end up with duplicates. Kubernetes uses these values to uniquely identify the nodes in the cluster; if they are not unique per node, the installation may fail.
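With more than a few nodes, comparing these values by eye gets error-prone. A sketch of a mechanical check (the values below are the sample ones from above; in practice you would collect them over ssh):

```shell
# One "node value" pair per line; sort | uniq -d surfaces any duplicated value.
cat > /tmp/node-ids.txt <<'EOF'
master231 00:0c:29:97:ce:c7
worker232 00:0c:29:31:90:93
worker233 00:0c:29:1a:94:53
EOF
dups=$(awk '{print $2}' /tmp/node-ids.txt | sort | uniq -d)
if [ -z "$dups" ]; then
  echo "all values unique"
else
  echo "duplicated values: $dups"
fi
```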

4. Check that the network is reachable

[root@master231:1 ~]# ping sina.com -c 10 

5. Let iptables see bridged traffic

cat <<EOF | tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
[root@master231:1 ~]#  sysctl --system

6. Check that the required ports are free

Control-plane node(s):
Protocol  Direction  Port range   Purpose                  Used by
TCP       Inbound    6443         Kubernetes API server    All
TCP       Inbound    2379-2380    etcd server client API   kube-apiserver, etcd
TCP       Inbound    10250        kubelet API              Self, control plane
TCP       Inbound    10259        kube-scheduler           Self
TCP       Inbound    10257        kube-controller-manager  Self

Worker node(s):
Protocol  Direction  Port range    Purpose             Used by
TCP       Inbound    10250         kubelet API         Self, control plane
TCP       Inbound    10256         kube-proxy          Self, load balancers
TCP       Inbound    30000-32767   NodePort Services†  All

Reference: https://kubernetes.io/zh-cn/docs/reference/networking/ports-and-protocols/

On both master and worker nodes, check that none of these component ports are already occupied.
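One way to probe the ports locally without installing extra tools is bash's `/dev/tcp` pseudo-device (a sketch; on a fresh node every port should report free):

```shell
# Probe each control-plane port on localhost; a successful connect means it is in use.
for port in 6443 2379 2380 10250 10257 10259; do
  if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
    echo "port $port is in use"
  else
    echo "port $port is free"
  fi
done | tee /tmp/port-check.txt
```

`ss -tlnp | grep <port>` gives the same answer plus the owning process, if `iproute2` is installed.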

Switch the cgroup driver to systemd on all nodes

Tip:
On CentOS, if you do not change the cgroup driver to systemd, it defaults to cgroupfs and master initialization will fail.

The example below is for CentOS; on Ubuntu you can skip this step.
[root@master231 ~]# docker info | grep cgroup
Cgroup Driver: cgroupfs
[root@master231 ~]#
[root@master231 ~]# cat /etc/docker/daemon.json
{
"registry-mirrors": ["https://tuv7rqqq.mirror.aliyuncs.com","https://docker.mirrors.ustc.edu.cn/","https://hub-mirror.c.163.com/","https://reg-mirror.qiniu.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}

[root@master231 ~]# systemctl restart docker
[root@master231 ~]# docker info | grep "Cgroup Driver"
Cgroup Driver: systemd
[root@master231 ~]#

7. Install Docker (all nodes)

A one-shot Docker install script is available: follow the WeChat account 《原来开源》 and reply docker-install to get it.

Usage:
Install:   ./install-docker.sh i
Uninstall: ./install-docker.sh r
# 1. Update the package index:
sudo apt-get update

# 2. Let APT use HTTPS:
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common

# 3. Add Docker's official GPG key:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

# 4. Add the stable Docker repository:
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

# 5. Update the package index again:
sudo apt-get update

# 6. Install Docker CE (Community Edition):
sudo apt-get install docker-ce

# 7. Enable and start Docker:
sudo systemctl enable --now docker

# 8. Check the Docker version:
docker --version

# Tip: the steps above install the latest Docker. To install a specific version, list the available ones and pick:
apt-cache madison docker-ce
sudo apt-get install docker-ce=<VERSION_STRING>
# Replace <VERSION_STRING> with the version you need.

# 9. Configure Docker registry mirrors
cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"registry-mirrors": [
"https://docker.1panel.live",
"https://dockercf.jsdelivr.fyi",
"https://docker-cf.registry.cyou",
"https://docker.chenby.cn",
"https://docker.jsdelivr.fyi",
"https://docker.m.daocloud.io"
]
}
EOF

# 10. Restart Docker
[root@worker231 ~]# systemctl daemon-reload
[root@worker231 ~]# systemctl restart docker


# Check the Docker version
[root@master231:1 ~]# docker --version
Docker version 20.10.24, build 297e128
[root@master231 ~]# docker info  | grep "Cgroup Driver:"
Cgroup Driver: systemd

[root@worker232 ~]# docker info | grep "Cgroup Driver:"
Cgroup Driver: systemd

[root@worker233 ~]# docker info | grep "Cgroup Driver:"
Cgroup Driver: systemd

8. Install kubeadm, kubelet, and kubectl on all nodes

You need to install the following packages on every machine:
kubeadm:
    the tool that bootstraps the K8S cluster.
kubelet:
    runs on every node in the cluster and starts Pods and containers.
kubectl:
    the command-line tool for talking to the K8S cluster.

kubeadm does not install or manage kubelet or kubectl for you, so you must make sure their versions match the control plane (master) that kubeadm installs. Otherwise you risk version skew, which can lead to unexpected errors and problems.
That said, a skew of one minor version between the control plane and the kubelet is supported, but the kubelet version may never be newer than the "API SERVER". For example, a 1.7.0 kubelet is fully compatible with a 1.8.0 "API SERVER", but not the other way around.
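The skew rule can be checked mechanically. A minimal sketch comparing minor versions (the version strings below are sample values):

```shell
# kubelet may be at most one minor version behind, and never ahead of, the API server.
apiserver="1.23.17"
kubelet="1.23.17"
api_minor=$(echo "$apiserver" | cut -d. -f2)
kubelet_minor=$(echo "$kubelet" | cut -d. -f2)
if [ "$kubelet_minor" -gt "$api_minor" ]; then
  echo "BAD: kubelet ($kubelet) is newer than the API server ($apiserver)"
elif [ $(( api_minor - kubelet_minor )) -gt 1 ]; then
  echo "BAD: kubelet ($kubelet) is more than one minor version behind"
else
  echo "version skew OK"
fi
```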

8.1 Configure the package source on all K8S nodes

apt-get update && apt-get install -y apt-transport-https

curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - 

cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

[root@master231:1 ~]#  apt-get update

8.2 Check which K8S versions the repository provides

[root@master231 ~]# apt-cache madison kubeadm
kubeadm | 1.28.2-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.28.1-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.28.0-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
...
kubeadm | 1.23.17-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.23.16-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.23.15-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
kubeadm | 1.23.14-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages

8.3 Install kubelet, kubeadm, and kubectl

apt-get -y install kubelet=1.23.17-00 kubeadm=1.23.17-00 kubectl=1.23.17-00

8.4 Verify that the component versions match

[root@master231:1 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.17", GitCommit:"953be8927218ec8067e1af2641e540238ffd7576", GitTreeState:"clean", BuildDate:"2023-02-22T13:33:14Z", GoVersion:"go1.19.6", Compiler:"gc", Platform:"linux/amd64"}
[root@master231:1 ~]#

[root@worker232:1 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.17", GitCommit:"953be8927218ec8067e1af2641e540238ffd7576", GitTreeState:"clean", BuildDate:"2023-02-22T13:34:27Z", GoVersion:"go1.19.6", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?

[root@worker232:1 ~]# kubelet --version
Kubernetes v1.23.17

# Run the same checks on the other two nodes to make sure you did not install mismatched versions! (The localhost:8080 "connection refused" from kubectl on a worker is expected: workers have no kubeconfig yet.)
Reference:
https://kubernetes.io/zh/docs/tasks/tools/install-kubectl-linux/

9. Check that the time zone is identical on every node

[root@master231 ~]# date -R
Mon, 09 Sep 2024 14:58:34 +0800
[root@master231 ~]# ll /etc/localtime
lrwxrwxrwx 1 root root 33 Aug 30 15:27 /etc/localtime -> /usr/share/zoneinfo/Asia/Shanghai

[root@worker232 ~]# date -R
Mon, 09 Sep 2024 14:59:22 +0800
[root@worker232 ~]# ll /etc/localtime
lrwxrwxrwx 1 root root 33 Aug 30 15:27 /etc/localtime -> /usr/share/zoneinfo/Asia/Shanghai

[root@worker233 ~]# date -R
Mon, 09 Sep 2024 14:59:35 +0800
[root@worker233 ~]# ll /etc/localtime
lrwxrwxrwx 1 root root 33 Aug 30 15:27 /etc/localtime -> /usr/share/zoneinfo/Asia/Shanghai

9.1 Changing the time zone

[root@master231 ~]# ln -svf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime  

[root@worker233 ~]# ln -svf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

[root@worker232 ~]# ln -svf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

Initializing the K8S master components

1. Initialize the master node with kubeadm

[root@master231 ~]# kubeadm init \
--kubernetes-version=v1.23.17 \
--image-repository registry.aliyuncs.com/google_containers \
--pod-network-cidr=10.100.0.0/16 \
--service-cidr=10.200.0.0/16 \
--service-dns-domain=k8s-service

Tip:
The parameters are explained below.
...
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.231:6443 --token 8lutg1.3i6jcp7vgd7gpldl \
--discovery-token-ca-cert-hash sha256:e4a5061043bba6e271c502eb6e3fe80bb3555c8f00e42649e08f3939eacdb459
[root@master231 ~]#

Note: your token will differ from mine. Save it: a bootstrap token is valid for 24 hours by default, so your worker nodes must join the cluster within that window!
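If you lose the join command, `kubeadm token create --print-join-command` on the master prints a fresh one. The `--discovery-token-ca-cert-hash` part is just the SHA-256 of the cluster CA's public key, and can be re-derived with openssl. The sketch below reproduces that derivation against a throwaway self-signed CA (so it is runnable anywhere); on a real master you would point it at /etc/kubernetes/pki/ca.crt instead:

```shell
# Generate a throwaway CA so the pipeline runs outside a cluster;
# on a real master, replace /tmp/demo-ca.crt with /etc/kubernetes/pki/ca.crt.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt -subj "/CN=demo-ca" 2>/dev/null
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | awk '{print $NF}')
echo "sha256:${hash}"
```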

Fixing a failed master initialization

Possible causes:
- swap was not disabled, so initialization cannot complete;
- the node has fewer than 2 CPU cores;
- the required images were not imported.
Fixes:
- 1. Check for the conditions above:
free -h
lscpu

- 2. Reset the current node:
[root@master231 ~]# kubeadm reset -f
- 3. Retry the master initialization.

Parameter reference:

--kubernetes-version:
    The version of the K8S master components.

--image-repository:
    The image registry from which the K8S master component images are pulled.

--pod-network-cidr:
    The CIDR of the Pod network.

--service-cidr:
    The CIDR of the Service network.

--service-dns-domain:
    The DNS domain for Services; defaults to "cluster.local" if unset.

While initializing the cluster, kubeadm may print output like the following:

[init]
    The K8S version being initialized.
[preflight]
    Pre-flight work for installing the K8S cluster, such as pulling images; how long this takes depends on your network speed.

[certs]
    Generates the certificates, stored under "/etc/kubernetes/pki" by default.

[kubeconfig]
    Generates the cluster's default kubeconfig files, stored under "/etc/kubernetes" by default.

[kubelet-start]
    Starts the kubelet:
    environment variables are written to "/var/lib/kubelet/kubeadm-flags.env"
    the config file is written to "/var/lib/kubelet/config.yaml"

[control-plane]
    Uses the static manifest directory, "/etc/kubernetes/manifests" by default.
    This step creates the static Pods for "kube-apiserver", "kube-controller-manager", and "kube-scheduler".

[etcd]
    Creates the static Pod for etcd; its manifest also lives in "/etc/kubernetes/manifests".

[wait-control-plane]
    Waits for the kubelet to start the static Pods from "/etc/kubernetes/manifests".

[apiclient]
    Waits for all master components to become healthy.

[upload-config]
    Creates a ConfigMap named "kubeadm-config" in the "kube-system" namespace.

[kubelet]
    Creates a ConfigMap named "kubelet-config-1.22" in "kube-system", holding the cluster-wide kubelet configuration.

[upload-certs]
    Skipped on this node; see "--upload-certs" for details.

[mark-control-plane]
    Marks the control plane by labeling and tainting the master node.

[bootstrap-token]
    Creates a bootstrap token, e.g. "kbkgsa.fc97518diw8bdqid".
    This token is used later when joining nodes to the cluster and is also useful for RBAC.

[kubelet-finalize]
    Updates the kubelet's certificates.

[addons]
    Installs add-ons: "CoreDNS" and "kube-proxy".

2. Copy the admin kubeconfig used to manage the K8S cluster (the init output prints these commands)

[root@master231 ~]# mkdir -p $HOME/.kube
[root@master231 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master231 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

3. View the cluster nodes

[root@master231:1 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
etcd-0 Healthy {"health":"true","reason":""}
scheduler Healthy ok


[root@master231 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master231 NotReady control-plane,master 117s v1.23.17

Deploying the worker components

1. Run the join command on each worker node. !!! Use the command from YOUR own init output

[root@worker232 ~]# kubeadm join 10.0.0.231:6443 --token 8lutg1.3i6jcp7vgd7gpldl \
--discovery-token-ca-cert-hash sha256:e4a5061043bba6e271c502eb6e3fe80bb3555c8f00e42649e08f3939eacdb459

2. On the master, check the cluster's worker node list

[root@master231 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master231 NotReady control-plane,master 32m v1.23.17
worker232 NotReady <none> 36s v1.23.17
worker233 NotReady <none> 25s v1.23.17

Tip:
At this point the K8S components are deployed, but the container network is still missing, so every node shows "NotReady".

Deploying the flannel CNI plugin

The list of compatible CNI plugins to choose from:
https://kubernetes.io/zh-cn/docs/concepts/cluster-administration/addons/

flannel project page:
https://github.com/flannel-io/flannel#deploying-flannel-manually

1. Download the flannel manifest on all nodes

[root@master231:1 ~]#  wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# The download can be slow without a proxy

2. Change the Pod network CIDR.

# Change 10.244.0.0/16 to 10.100.0.0/16; see the flannel docs for details
...
net-conf.json: |
{
"Network": "10.100.0.0/16",
"EnableNFTables": false,
"Backend": {
"Type": "vxlan"
}
}
...

3. Install flannel

[root@master231 ~]# kubectl apply -f kube-flannel.yml 
namespace/kube-flannel created
serviceaccount/flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

4. Check that the flannel components came up

[root@master231 ~]# kubectl get pod -o wide -n kube-flannel
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-flannel-ds-7mchd 1/1 Running 0 20s 10.0.0.232 worker232 <none> <none>
kube-flannel-ds-ccwl7 1/1 Running 0 20s 10.0.0.231 master231 <none> <none>
kube-flannel-ds-wzzq9 1/1 Running 0 20s 10.0.0.233 worker233 <none> <none>

`The Pods won't be Running immediately while the init containers run; wait a moment and check again`

5. Verify node status

[root@master231 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master231 Ready control-plane,master 58m v1.23.17
worker232 Ready <none> 26m v1.23.17
worker233 Ready <none> 25m v1.23.17

6. Check that the flannel.1 interface exists

[root@master231 ~]# ifconfig 
cni0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 10.100.0.1 netmask 255.255.255.0 broadcast 0.0.0.0
inet6 fe80::7498:efff:fe86:d4bc prefixlen 64 scopeid 0x20<link>
ether 3a:28:99:ca:1f:85 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2 bytes 164 (164.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
...
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.100.0.0 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::d88c:e5ff:fe0f:b4ba prefixlen 64 scopeid 0x20<link>
ether da:8c:e5:0f:b4:ba txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 33 overruns 0 carrier 0 collisions 0
[root@worker232 ~]# ifconfig 
cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.100.1.1 netmask 255.255.255.0 broadcast 10.100.1.255
inet6 fe80::3828:99ff:feca:1f85 prefixlen 64 scopeid 0x20<link>
ether 3a:28:99:ca:1f:85 txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 30 bytes 4343 (4.3 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
...
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.100.1.0 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::4846:2dff:fe52:2307 prefixlen 64 scopeid 0x20<link>
ether 4a:46:2d:52:23:07 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
[root@worker233 ~]# ifconfig 
cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.100.2.1 netmask 255.255.255.0 broadcast 10.100.2.255
inet6 fe80::3828:99ff:feca:1f85 prefixlen 64 scopeid 0x20<link>
ether 3a:28:99:ca:1f:85 txqueuelen 1000 (Ethernet)
RX packets 514 bytes 43908 (43.9 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 534 bytes 68678 (68.6 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
...
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.100.2.0 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::17:21ff:fe24:c641 prefixlen 64 scopeid 0x20<link>
ether 02:17:21:24:c6:41 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 32 overruns 0 carrier 0 collisions 0

7. Missing cni0 bridge

1. Symptom
Some nodes have only the flannel.1 device and no cni0 bridge. In that case create the cni0 bridge by hand, making sure its subnet matches flannel.1's.
Create the cni0 bridge manually
---> Assume master231's flannel.1 is on the 10.100.0.0 subnet:
ip link add cni0 type bridge
ip link set dev cni0 up
ip addr add 10.100.0.1/24 dev cni0

---> Assume worker232's flannel.1 is on the 10.100.1.0 subnet:
ip link add cni0 type bridge
ip link set dev cni0 up
ip addr add 10.100.1.1/24 dev cni0

Verify that the Pod CNI network works

#1. Write the Pod manifests
[root@master231 ~]# cat > network-cni-test.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: xiuxian-v1
spec:
  nodeName: worker232
  containers:
  - image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1
    name: xiuxian
---
apiVersion: v1
kind: Pod
metadata:
  name: xiuxian-v2
spec:
  nodeName: worker233
  containers:
  - image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v2
    name: xiuxian
EOF

#2. Create the Pods
[root@master231 ~]# kubectl apply -f network-cni-test.yaml
pod/xiuxian-v1 created
pod/xiuxian-v2 created

#3. List the Pods
[root@master231 ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
xiuxian-v1 1/1 Running 0 11s 10.100.1.2 worker232 <none> <none>
xiuxian-v2 1/1 Running 0 11s 10.100.2.4 worker233 <none> <none>

#4. Access the Pods on worker232 and worker233
[root@master231 ~]# curl 10.100.1.2
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8"/>
<title>yinzhengjie apps v1</title>
<style>
div img {
width: 900px;
height: 600px;
margin: 0;
}
</style>
</head>

<body>
<h1 style="color: green">凡人修仙传 v1 </h1>
<div>
<img src="1.jpg">
<div>
</body>

</html>
[root@master231 ~]#
[root@master231 ~]# curl 10.100.2.4
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8"/>
<title>yinzhengjie apps v2</title>
<style>
div img {
width: 900px;
height: 600px;
margin: 0;
}
</style>
</head>

<body>
<h1 style="color: red">凡人修仙传 v2 </h1>
<div>
<img src="2.jpg">
<div>
</body>

</html>

kubectl shell completion

sudo apt install bash-completion
If bash-completion is not enabled on your system, you can try enabling it with:
echo "if [ -f /etc/bash_completion ]; then . /etc/bash_completion; fi" >> ~/.bashrc
source ~/.bashrc

[root@master231 ~]# kubectl completion bash > ~/.kube/completion.bash.inc
[root@master231 ~]# echo source '$HOME/.kube/completion.bash.inc' >> ~/.bashrc
[root@master231 ~]# source ~/.bashrc