Quick Kubernetes (k8s) Cluster Setup

Tags: docker, k8s

1 Create three virtual machines

1.1 Plan the three VMs

k8s-node1 192.168.56.100
k8s-node2 192.168.56.101
k8s-node3 192.168.56.102

1.2 Create the Vagrantfile

On the host machine, create a folder such as F:\javatool\virtual\guli and create a file named Vagrantfile inside it with the following contents:

Vagrant.configure("2") do |config|
  (1..3).each do |i|
    config.vm.define "k8s-node#{i}" do |node|
      # Base box for the VM
      node.vm.box = "centos/7"

      # Hostname of the VM
      node.vm.hostname = "k8s-node#{i}"

      # Private (host-only) network IP of the VM
      node.vm.network "private_network", ip: "192.168.56.#{99+i}", netmask: "255.255.255.0"

      # Shared folder between host and VM (optional)
      # node.vm.synced_folder "~/Documents/vagrant/share", "/home/vagrant/share"

      # VirtualBox provider settings
      node.vm.provider "virtualbox" do |v|
        # VM name shown in VirtualBox
        v.name = "k8s-node#{i}"
        # Memory size in MB
        v.memory = 4096
        # Number of CPUs
        v.cpus = 4
      end
    end
  end
end

1.3 Run vagrant up

From a Windows command prompt, run vagrant up in the directory containing the Vagrantfile. This takes a while; sample log output:

F:\javatool\virtual\guli>vagrant up
Bringing machine 'k8s-node1' up with 'virtualbox' provider...
Bringing machine 'k8s-node2' up with 'virtualbox' provider...
Bringing machine 'k8s-node3' up with 'virtualbox' provider...
==> k8s-node1: Importing base box 'centos/7'...
==> k8s-node1: Matching MAC address for NAT networking...
==> k8s-node1: Checking if box 'centos/7' version '2004.01' is up to date...
==> k8s-node1: Setting the name of the VM: k8s-node1
==> k8s-node1: Clearing any previously set network interfaces...
==> k8s-node1: Preparing network interfaces based on configuration...
    k8s-node1: Adapter 1: nat
    k8s-node1: Adapter 2: hostonly
==> k8s-node1: Forwarding ports...
    k8s-node1: 22 (guest) => 2222 (host) (adapter 1)
==> k8s-node1: Running 'pre-boot' VM customizations...
==> k8s-node1: Booting VM...
==> k8s-node1: Waiting for machine to boot. This may take a few minutes...
    k8s-node1: SSH address: 127.0.0.1:2222
    k8s-node1: SSH username: vagrant
    k8s-node1: SSH auth method: private key
    k8s-node1:
    k8s-node1: Vagrant insecure key detected. Vagrant will automatically replace
    k8s-node1: this with a newly generated keypair for better security.
    k8s-node1:
    k8s-node1: Inserting generated public key within guest...
    k8s-node1: Removing insecure key from the guest if it's present...
    k8s-node1: Key inserted! Disconnecting and reconnecting using new SSH key...
==> k8s-node1: Machine booted and ready!
==> k8s-node1: Checking for guest additions in VM...
    k8s-node1: No guest additions were detected on the base box for this VM! Guest
    k8s-node1: additions are required for forwarded ports, shared folders, host only
    k8s-node1: networking, and more. If SSH fails on this machine, please install
    k8s-node1: the guest additions and repackage the box to continue.
    k8s-node1:
    k8s-node1: This is not an error message; everything may continue to work properly,
    k8s-node1: in which case you may ignore this message.
==> k8s-node1: Setting hostname...
==> k8s-node1: Configuring and enabling network interfaces...
==> k8s-node1: Rsyncing folder: /cygdrive/f/javatool/virtual/guli/ => /vagrant
==> k8s-node2: Importing base box 'centos/7'...
==> k8s-node2: Matching MAC address for NAT networking...
==> k8s-node2: Checking if box 'centos/7' version '2004.01' is up to date...
==> k8s-node2: Setting the name of the VM: k8s-node2
==> k8s-node2: Fixed port collision for 22 => 2222. Now on port 2200.
==> k8s-node2: Clearing any previously set network interfaces...
==> k8s-node2: Preparing network interfaces based on configuration...
    k8s-node2: Adapter 1: nat
    k8s-node2: Adapter 2: hostonly
==> k8s-node2: Forwarding ports...
    k8s-node2: 22 (guest) => 2200 (host) (adapter 1)
==> k8s-node2: Running 'pre-boot' VM customizations...
==> k8s-node2: Booting VM...
==> k8s-node2: Waiting for machine to boot. This may take a few minutes...
    k8s-node2: SSH address: 127.0.0.1:2200
    k8s-node2: SSH username: vagrant
    k8s-node2: SSH auth method: private key
    k8s-node2:
    k8s-node2: Vagrant insecure key detected. Vagrant will automatically replace
    k8s-node2: this with a newly generated keypair for better security.
    k8s-node2:
    k8s-node2: Inserting generated public key within guest...
    k8s-node2: Removing insecure key from the guest if it's present...
    k8s-node2: Key inserted! Disconnecting and reconnecting using new SSH key...
==> k8s-node2: Machine booted and ready!
==> k8s-node2: Checking for guest additions in VM...
    k8s-node2: No guest additions were detected on the base box for this VM! Guest
    k8s-node2: additions are required for forwarded ports, shared folders, host only
    k8s-node2: networking, and more. If SSH fails on this machine, please install
    k8s-node2: the guest additions and repackage the box to continue.
    k8s-node2:
    k8s-node2: This is not an error message; everything may continue to work properly,
    k8s-node2: in which case you may ignore this message.
==> k8s-node2: Setting hostname...
==> k8s-node2: Configuring and enabling network interfaces...
==> k8s-node2: Rsyncing folder: /cygdrive/f/javatool/virtual/guli/ => /vagrant
==> k8s-node3: Importing base box 'centos/7'...
==> k8s-node3: Matching MAC address for NAT networking...
==> k8s-node3: Checking if box 'centos/7' version '2004.01' is up to date...
==> k8s-node3: Setting the name of the VM: k8s-node3
==> k8s-node3: Fixed port collision for 22 => 2222. Now on port 2201.
==> k8s-node3: Clearing any previously set network interfaces...
==> k8s-node3: Preparing network interfaces based on configuration...
    k8s-node3: Adapter 1: nat
    k8s-node3: Adapter 2: hostonly
==> k8s-node3: Forwarding ports...
    k8s-node3: 22 (guest) => 2201 (host) (adapter 1)
==> k8s-node3: Running 'pre-boot' VM customizations...
==> k8s-node3: Booting VM...
==> k8s-node3: Waiting for machine to boot. This may take a few minutes...
    k8s-node3: SSH address: 127.0.0.1:2201
    k8s-node3: SSH username: vagrant
    k8s-node3: SSH auth method: private key
    k8s-node3:
    k8s-node3: Vagrant insecure key detected. Vagrant will automatically replace
    k8s-node3: this with a newly generated keypair for better security.
    k8s-node3:
    k8s-node3: Inserting generated public key within guest...
    k8s-node3: Removing insecure key from the guest if it's present...
    k8s-node3: Key inserted! Disconnecting and reconnecting using new SSH key...
==> k8s-node3: Machine booted and ready!
==> k8s-node3: Checking for guest additions in VM...
    k8s-node3: No guest additions were detected on the base box for this VM! Guest
    k8s-node3: additions are required for forwarded ports, shared folders, host only
    k8s-node3: networking, and more. If SSH fails on this machine, please install
    k8s-node3: the guest additions and repackage the box to continue.
    k8s-node3:
    k8s-node3: This is not an error message; everything may continue to work properly,
    k8s-node3: in which case you may ignore this message.
==> k8s-node3: Setting hostname...
==> k8s-node3: Configuring and enabling network interfaces...
==> k8s-node3: Rsyncing folder: /cygdrive/f/javatool/virtual/guli/ => /vagrant

1.4 VM initialization complete

When the command finishes, open VirtualBox and you will see three running VMs named k8s-node1, k8s-node2, and k8s-node3.

 

1.5 Enable password-based SSH access on the VMs

Run vagrant ssh k8s-node1, then switch to root with su root; when prompted for a password, the initial password is vagrant.

Edit the SSH daemon configuration with vi /etc/ssh/sshd_config.

Change PasswordAuthentication no to PasswordAuthentication yes, then save and exit.

Restart sshd: service sshd restart

Run exit once to leave the root user;

Run exit a second time to leave the VM.

F:\javatool\virtual\guli>vagrant ssh k8s-node1
[vagrant@k8s-node1 ~]$ su root
Password:
[root@k8s-node1 vagrant]# vi /etc/ssh/sshd_config
[root@k8s-node1 vagrant]# service sshd restart
Redirecting to /bin/systemctl restart sshd.service
[root@k8s-node1 vagrant]# exit;
exit
[vagrant@k8s-node1 ~]$ exit;
logout
Connection to 127.0.0.1 closed.

Repeat the same steps on k8s-node2 and k8s-node3.

Connect to the three VMs with Xshell; VM creation is now complete.
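
Any SSH client works once password authentication is enabled. A minimal sketch from the host, assuming the vagrant user's password is vagrant (switch to root with su root inside the session):

ssh vagrant@192.168.56.100
ssh vagrant@192.168.56.101
ssh vagrant@192.168.56.102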

2 Prerequisite environment setup for the k8s cluster

2.1 Check ip route on all three machines

Run ip route show:

[root@k8s-node1 ~]# ip route show
default via 10.0.2.2 dev eth0 proto dhcp metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 
192.168.56.0/24 dev eth1 proto kernel scope link src 192.168.56.100 metric 101 

[root@k8s-node2 ~]# ip route show
default via 10.0.2.2 dev eth0 proto dhcp metric 101 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 101 
192.168.56.0/24 dev eth1 proto kernel scope link src 192.168.56.101 metric 100

[root@k8s-node3 ~]# ip route show
default via 10.0.2.2 dev eth0 proto dhcp metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 
192.168.56.0/24 dev eth1 proto kernel scope link src 192.168.56.102 metric 101 

Notice that all three VMs use the eth0 interface for the default route and share the same IP, 10.0.2.15. This is because the first adapter uses the default network address translation (NAT) mode.

We need to change this networking mode. Open VirtualBox, go to File -> Preferences -> Network, and add a new NAT Network.

 

Next, change each VM's network attachment: set the adapter to "NAT Network", select the network just created as the name, and refresh the MAC address.
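
The same configuration can also be scripted from the host with VBoxManage. This is only a sketch; it assumes the NAT network is named NatNetwork and that the VMs are powered off before modifyvm is run:

VBoxManage natnetwork add --netname NatNetwork --network "10.0.2.0/24" --enable --dhcp on
VBoxManage modifyvm "k8s-node1" --nic1 natnetwork --nat-network1 "NatNetwork" --macaddress1 auto
VBoxManage modifyvm "k8s-node2" --nic1 natnetwork --nat-network1 "NatNetwork" --macaddress1 auto
VBoxManage modifyvm "k8s-node3" --nic1 natnetwork --nat-network1 "NatNetwork" --macaddress1 auto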

Run ip route show again:

[root@k8s-node1 ~]# ip route show
default via 10.0.2.1 dev eth0 proto dhcp metric 100 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 100 
192.168.56.0/24 dev eth1 proto kernel scope link src 192.168.56.100 metric 101 

[root@k8s-node2 ~]# ip route show
default via 10.0.2.1 dev eth0 proto dhcp metric 101 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.4 metric 101 
192.168.56.0/24 dev eth1 proto kernel scope link src 192.168.56.101 metric 100 

[root@k8s-node3 ~]# ip route show
default via 10.0.2.1 dev eth0 proto dhcp metric 101 
10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.5 metric 101 
192.168.56.0/24 dev eth1 proto kernel scope link src 192.168.56.102 metric 100

Now the three VMs have different addresses on the NAT network.

Ping each VM's IP from the other two machines to verify internal connectivity, and ping www.baidu.com from each machine to verify outbound internet access.
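
A quick way to run those checks from each node (a minimal sketch):

for ip in 192.168.56.100 192.168.56.101 192.168.56.102; do ping -c 2 $ip; done
ping -c 2 www.baidu.com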

2.2 Configure the Linux environment

# (1) Disable the firewall
systemctl stop firewalld && systemctl disable firewalld

# (2) Disable SELinux
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# (3) Disable swap
swapoff -a
sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab

# (4) Reset iptables and set the FORWARD chain policy to ACCEPT
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT

# (5) Kernel parameters required by Kubernetes
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system
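
If sysctl --system reports that the net.bridge.* keys do not exist, the br_netfilter kernel module is likely not loaded yet; a sketch of loading it and making it persistent across reboots:

# load the bridge netfilter module so the net.bridge.* sysctls take effect
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl --system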

Add hostname-to-IP mappings to /etc/hosts:

vi /etc/hosts

10.0.2.15 k8s-node1
10.0.2.4 k8s-node2
10.0.2.5 k8s-node3

If a node's hostname does not match, set it with hostnamectl set-hostname <newhostname>.
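
Instead of editing /etc/hosts interactively, the same entries can be appended on each node non-interactively (a sketch using the NAT addresses observed above):

cat >> /etc/hosts <<EOF
10.0.2.15 k8s-node1
10.0.2.4 k8s-node2
10.0.2.5 k8s-node3
EOF

# example: set the hostname on the second node
hostnamectl set-hostname k8s-node2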

2.3 Install Docker

01 Remove any previous Docker installation
    sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine

02 Install required dependencies
    sudo yum install -y yum-utils \
    device-mapper-persistent-data \
    lvm2

03 Configure the Docker yum repository [an Alibaba Cloud mirror repository can be used instead; see the note below]
    sudo yum-config-manager \
      --add-repo \
      https://download.docker.com/linux/centos/docker-ce.repo

    [Visit https://cr.console.aliyun.com/cn-hangzhou/instances/mirrors, log in with your Alibaba Cloud account, and you will find a personal image accelerator address in the lower-left menu.]

04 Install Docker
    sudo yum install -y docker-ce docker-ce-cli containerd.io

05 Start Docker
    sudo systemctl start docker

06 Verify the installation
    sudo docker run hello-world

07 Enable Docker to start on boot
    sudo systemctl enable docker

2.4 Configure a registry mirror (image accelerator)

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://22hf0lkd.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
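
To confirm the mirror is active, docker info should list it under Registry Mirrors (optional check):

sudo docker info | grep -A 1 "Registry Mirrors"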

2.5 Installing kubeadm, kubelet and kubectl

2.5.1 Configure the yum repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2.5.2 Install kubeadm, kubelet, and kubectl

yum list kubelet --showduplicates | sort -r
yum install -y kubeadm-1.17.3 kubelet-1.17.3 kubectl-1.17.3
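
The installed versions can be verified before continuing (optional check):

kubeadm version
kubelet --version
kubectl version --client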

2.5.3 Use the same cgroup driver for Docker and Kubernetes

# docker: add the following entry to /etc/docker/daemon.json
vi /etc/docker/daemon.json
    "exec-opts": ["native.cgroupdriver=systemd"],

systemctl restart docker

# kubelet: if the sed command reports "directory not exist", that is fine; just continue
sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

systemctl enable kubelet && systemctl start kubelet
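
Since /etc/docker/daemon.json was already created in section 2.4, the exec-opts entry must be merged into that JSON object rather than pasted in as a standalone line; the combined file would look roughly like this:

{
  "registry-mirrors": ["https://22hf0lkd.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}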

2.5.4 Initialize the master node

Official documentation: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

Note: this step is performed only on the master node. If the required control-plane images are not present locally, run the master_images.sh script shown below first.

kubeadm init --kubernetes-version=1.17.3 --apiserver-advertise-address=10.0.2.15  --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/16

[To reset the cluster state and start over, run kubeadm reset, then repeat the command above.]

The master_images.sh script pulls the required images from an Alibaba Cloud mirror and re-tags them with the k8s.gcr.io names that kubeadm expects:
#!/bin/bash

images=(
    kube-apiserver:v1.17.3
    kube-proxy:v1.17.3
    kube-controller-manager:v1.17.3
    kube-scheduler:v1.17.3
    coredns:1.6.5
    etcd:3.4.3-0
    pause:3.1
)

for imageName in ${images[@]} ; do
    # pull each image from the Alibaba Cloud mirror, then re-tag it as the k8s.gcr.io image kubeadm expects
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
done
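
A sketch of how the script is typically used on the master node before kubeadm init:

chmod +x master_images.sh
./master_images.sh
# verify the images are now tagged as k8s.gcr.io/*
docker images | grep k8s.gcr.io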

Log output from a successful master initialization:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.2.15:6443 --token njegi6.oj7rc4x6agiu1go2 \
    --discovery-token-ca-cert-hash sha256:c282026afc2e329f4f80f6793966a906a75940bee4daeb36261ea383f69b4154 

Run the commands from the prompt:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
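
At this point kubectl can talk to the cluster; the master node will usually report NotReady until the network plugin in the next step is installed:

kubectl get nodes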

Install the network plugin (flannel) by running kubectl apply -f kube-flannel.yml.

The contents of kube-flannel.yml are as follows:

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: jmgao1983/flannel:v0.11.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: jmgao1983/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - ppc64le
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - s390x
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-s390x
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-s390x
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

After applying the manifest, check whether the installation succeeded; this takes around 3 minutes:

[root@k8s-node1 k8s]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                READY   STATUS    RESTARTS   AGE
kube-system   coredns-6955765f44-2qrmj            1/1     Running   0          14m
kube-system   coredns-6955765f44-s4t2r            1/1     Running   0          14m
kube-system   etcd-k8s-node1                      1/1     Running   0          14m
kube-system   kube-apiserver-k8s-node1            1/1     Running   0          14m
kube-system   kube-controller-manager-k8s-node1   1/1     Running   0          14m
kube-system   kube-flannel-ds-amd64-nlwhz         1/1     Running   0          2m4s
kube-system   kube-proxy-ml92v                    1/1     Running   0          14m
kube-system   kube-scheduler-k8s-node1            1/1     Running   0          14m

When all pods are in the Running state, the control plane is ready.

2.5.5 Join the worker nodes to the master

On k8s-node2 and k8s-node3, run the join command printed by kubeadm init:

kubeadm join 10.0.2.15:6443 --token njegi6.oj7rc4x6agiu1go2 \
    --discovery-token-ca-cert-hash sha256:c282026afc2e329f4f80f6793966a906a75940bee4daeb36261ea383f69b4154
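
kubeadm init tokens expire after 24 hours by default; if the token above no longer works, a fresh join command can be generated on the master:

kubeadm token create --print-join-command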

This process takes a while; run kubectl get pod -n kube-system -o wide on the master to watch the progress:

[root@k8s-node1 k8s]# kubectl get pod -n kube-system -o wide
NAME                                READY   STATUS              RESTARTS   AGE    IP           NODE        NOMINATED NODE   READINESS GATES
coredns-6955765f44-2qrmj            1/1     Running             0          27m    10.244.0.2   k8s-node1   <none>           <none>
coredns-6955765f44-s4t2r            1/1     Running             0          27m    10.244.0.3   k8s-node1   <none>           <none>
etcd-k8s-node1                      1/1     Running             0          27m    10.0.2.15    k8s-node1   <none>           <none>
kube-apiserver-k8s-node1            1/1     Running             0          27m    10.0.2.15    k8s-node1   <none>           <none>
kube-controller-manager-k8s-node1   1/1     Running             0          27m    10.0.2.15    k8s-node1   <none>           <none>
kube-flannel-ds-amd64-dhnt8         0/1     Init:0/1            0          108s   10.0.2.5     k8s-node3   <none>           <none>
kube-flannel-ds-amd64-nlwhz         1/1     Running             0          15m    10.0.2.15    k8s-node1   <none>           <none>
kube-flannel-ds-amd64-zqhzv         0/1     Init:0/1            0          115s   10.0.2.4     k8s-node2   <none>           <none>
kube-proxy-cvj7d                    0/1     ContainerCreating   0          108s   10.0.2.5     k8s-node3   <none>           <none>
kube-proxy-m28l8                    0/1     ContainerCreating   0          115s   10.0.2.4     k8s-node2   <none>           <none>
kube-proxy-ml92v                    1/1     Running             0          27m    10.0.2.15    k8s-node1   <none>           <none>
kube-scheduler-k8s-node1            1/1     Running             0          27m    10.0.2.15    k8s-node1   <none>           <none>
[root@k8s-node1 k8s]# kubectl get nodes
NAME        STATUS   ROLES    AGE     VERSION
k8s-node1   Ready    master   12m     v1.17.3
k8s-node2   Ready    <none>   9m44s   v1.17.3
k8s-node3   Ready    <none>   6m35s   v1.17.3
Original article (CC 4.0 BY-SA): https://blog.csdn.net/qq_36305027/article/details/110304239
