Operations automation reduces repetitive work and the cost of knowledge transfer, making delivery more efficient and safer and keeping products running more stably. Fault handling shifts from after-the-fact response to early detection, and from manual intervention to automatic system failover.

Installing a Kubernetes cluster on CentOS 8, installing and configuring the Harbor registry, and installing Helm


K8s Cluster Installation

1. Preparation

Harbor: image repository host

k8s-master01, k8s-node01, k8s-node02: cluster nodes, installed with kubeadm

Router: a soft router, used to reach Google (for pulling images from k8s.gcr.io)

2. Cluster Installation

192.168.253.167 k8s-master
192.168.253.168 k8s-node01
192.168.253.169 k8s-node02

Set the hostname on each node and make the hosts resolve each other via /etc/hosts

hostnamectl set-hostname --static k8s-master
hostnamectl set-hostname --static k8s-node01
hostnamectl set-hostname --static k8s-node02

vim /etc/hosts
192.168.253.167 k8s-master
192.168.253.168 k8s-node01
192.168.253.169 k8s-node02

Install dependencies

yum install -y vim wget net-tools git
yum install lrzsz --nogpgcheck

Switch the firewall to iptables and set empty rules

systemctl stop firewalld && systemctl disable firewalld
yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save    # (used on RHEL/CentOS 7)

Disable the swap partition and SELinux (executed successfully)

swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

Tune kernel parameters for Kubernetes

cat > kubernetes.conf << EOF
# not executed:
net.bridge.bridge-nf-call-iptables=1
# not executed:
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
# not executed:
net.ipv4.tcp_tw_recycle=0
# forbid swap usage; it is only used when the system hits OOM
vm.swappiness=0
# do not check whether physical memory is sufficient
vm.overcommit_memory=1
# do not panic on OOM; let the OOM killer handle it
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF

cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
sysctl -p /etc/sysctl.d/kubernetes.conf
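Note that the two net.bridge.bridge-nf-call-* keys only exist once the br_netfilter module is loaded (it is loaded again in the IPVS step below). A quick check sketch to confirm the settings took effect:

```
# Load the bridge netfilter module and confirm the bridge-related sysctls are active
modprobe br_netfilter
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
```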

Adjust the system time zone (executed successfully)

# Set the system time zone to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai
# Write the current UTC time to the hardware clock
timedatectl set-local-rtc 0
# Restart the services that depend on the system time
systemctl restart rsyslog
systemctl restart crond

Stop services the system does not need

systemctl stop postfix && systemctl disable postfix    # not executed

Configure rsyslogd and systemd journald (did not execute successfully)

mkdir /var/log/journal          # directory for persistent log storage
mkdir /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
# persist logs to disk
Storage=persistent
# compress historical logs
Compress=yes
SyncIntervalSec=5m
RateLimitBurst=1000
# maximum disk usage 10G
SystemMaxUse=10G
# maximum size of a single log file 200M
SystemMaxFileSize=200M
# keep logs for 2 weeks
MaxRetentionSec=2week
# do not forward logs to syslog
ForwardToSyslog=no
EOF
systemctl restart systemd-journald

Upgrade the kernel version (not executed)

rpm -Uvh https://mirrors.tuna.tsinghua.edu.cn/elrepo/kernel/el8/x86_64/RPMS/elrepo-release-8.1-1.el8.elrepo.noarch.rpm
# After installation, check that the kernel menuentry in /boot/grub2/grub.cfg contains an initrd16 line; if not, install again.
yum --enablerepo=elrepo-kernel install -y kernel-lt
# Boot from the new kernel by default
grub2-set-default "Centos Linux (4.4.182-1.el7.elrepo.x86_64) 7 (CORE)"

Prerequisites for enabling IPVS in kube-proxy (executed successfully)

# Install the ipset package on all nodes; ipvsadm is optional but handy for inspecting IPVS rules
yum install ipset -y
yum install ipvsadm -y

modprobe br_netfilter
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

Install Docker (executed successfully)

# Step 1: install the required system tools
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: add the repository information
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: install containerd.io first (on CentOS 8 the el7 package is pulled in directly), then docker-ce
yum install -y containerd.io
dnf install https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm
sudo yum -y install docker-ce
# Step 4: start the Docker service
sudo service docker start && systemctl enable docker

# Notes:
# The official repository only enables the latest stable packages by default. Other channels can be
# enabled by editing the repo file; for example, the test channel is disabled by default:
# vim /etc/yum.repos.d/docker-ce.repo
#   change enabled=0 to enabled=1 under [docker-ce-test]
#
# Installing a specific Docker CE version:
# Step 1: list the available versions:
# yum list docker-ce.x86_64 --showduplicates | sort -r
#   Loading mirror speeds from cached hostfile
#   Loaded plugins: branch, fastestmirror, langpacks
#   docker-ce.x86_64            17.03.1.ce-1.el7.centos            docker-ce-stable
#   docker-ce.x86_64            17.03.1.ce-1.el7.centos            @docker-ce-stable
#   docker-ce.x86_64            17.03.0.ce-1.el7.centos            docker-ce-stable
#   Available Packages
# Step 2: install the chosen version (VERSION, e.g. 17.03.0.ce.1-1.el7.centos):
# sudo yum -y install docker-ce-[VERSION]
docker version

Create the /etc/docker directory

mkdir /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
EOF
mkdir -p /etc/systemd/system/docker.service.d

Restart the Docker service

systemctl daemon-reload && systemctl restart docker && systemctl enable docker 
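To confirm that Docker actually picked up the systemd cgroup driver configured in daemon.json after the restart, a quick check:

```
# Docker should report "Cgroup Driver: systemd" after the restart
docker info 2>/dev/null | grep -i "cgroup driver"
```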

Install kubeadm (master and worker configuration)

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
# Note: since the mirror is not synced through the official channel, the GPG index check may fail;
# in that case install with: yum install -y --nogpgcheck kubelet kubeadm kubectl
systemctl enable kubelet.service

Initialize the master node:

# List the required image versions
kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.18.2
k8s.gcr.io/kube-controller-manager:v1.18.2
k8s.gcr.io/kube-scheduler:v1.18.2
k8s.gcr.io/kube-proxy:v1.18.2
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7

Pull the images:

# Pull the images (requires a connection that can reach k8s.gcr.io):
docker pull k8s.gcr.io/kube-apiserver:v1.18.2
docker pull k8s.gcr.io/kube-controller-manager:v1.18.2
docker pull k8s.gcr.io/kube-scheduler:v1.18.2
docker pull k8s.gcr.io/kube-proxy:v1.18.2
docker pull k8s.gcr.io/pause:3.2
docker pull k8s.gcr.io/etcd:3.4.3-0
docker pull k8s.gcr.io/coredns:1.6.7

Save the images:

docker save k8s.gcr.io/kube-proxy -o kube-proxy.tar
docker save k8s.gcr.io/kube-apiserver -o kube-apiserver.tar
docker save k8s.gcr.io/kube-scheduler -o kube-scheduler.tar
docker save k8s.gcr.io/kube-controller-manager -o kube-controller-manager.tar
docker save k8s.gcr.io/pause -o pause.tar
docker save k8s.gcr.io/etcd -o etcd.tar
docker save k8s.gcr.io/coredns -o coredns.tar

docker save quay.io/coreos/flannel -o flannel.tar

vim load-images.sh
#!/bin/bash
ls /root/kubeadm-basic.images > /tmp/images_list.txt
cd /root/kubeadm-basic.images
for i in $( cat /tmp/images_list.txt )
do
  docker load -i $i
done
rm -rf /tmp/images_list.txt

chmod a+x load-images.sh
bash load-images.sh
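A quick way to confirm on each node that the images were imported (the tags should match the kubeadm list shown earlier):

```
# List the imported control-plane images and the flannel image
docker images | grep -E 'k8s.gcr.io|flannel'
```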

Initialize the master node:

# Create the init configuration file:
kubeadm config print init-defaults > kubeadm-config.yaml

# Edit the configuration file:
vim kubeadm-config.yaml
localAPIEndpoint:
  advertiseAddress: 192.168.253.167    # change advertiseAddress to the master address
kubernetesVersion: v1.18.2
networking:
  podSubnet: "10.244.0.0/16"
---
# add this section to switch kube-proxy to ipvs mode
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs

# Initialize (Docker must be running):
kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
Error 1: a warning that Docker is not enabled at boot; just run the command from the hint:

[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
systemctl enable docker.service

Error 2: the --experimental-upload-certs flag no longer exists; use --upload-certs instead:

# kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log
Error: unknown flag: --experimental-upload-certs

Error 3: the Docker cgroup driver is cgroupfs; systemd is the recommended setting:

[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/

Add the following configuration:

# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://wv1h618x.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {"max-size": "100m"}
}

Restart Docker:
# systemctl daemon-reload && systemctl restart docker && systemctl enable docker

# kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
W0502 20:39:04.447334   29833 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.2
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING FileExisting-tc]: tc not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.253.167]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.253.167 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.253.167 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
W0502 20:39:18.493122   29833 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0502 20:39:18.502456   29833 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 28.032749 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
e04dd9c52332d23bab221c18d59016fb3789d7f39c8529681b7cc476707c1380
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.253.167:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:289495fa530177884fc6606727728625102df14b8dd042586cb8de61051da0e8

The init succeeded. Note that the success output includes the follow-up commands to run, as well as the command used to join nodes to the cluster.

Set up kubectl on the master (and on any other admin machine):

 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config

If some of the generated configuration gets messed up, you can start over with:
kubeadm reset

Check the generated certificates:
[root@k8s-master ~]# cd /etc/kubernetes/pki/
[root@k8s-master pki]# ls
apiserver.crt              apiserver-etcd-client.key  apiserver-kubelet-client.crt  ca.crt  etcd                front-proxy-ca.key      front-proxy-client.key  sa.pub
apiserver-etcd-client.crt  apiserver.key              apiserver-kubelet-client.key  ca.key  front-proxy-ca.crt  front-proxy-client.crt  sa.key

Check the nodes:
[root@k8s-master pki]# kubectl get node
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   NotReady   master   6m33s   v1.18.2

Note: the master shows NotReady. Inspecting the pods reveals a network error, i.e. flannel is not installed, so we pull the flannel image next.

Deploy the network plugin:

# Download the latest kube-flannel.yml for the flannel network plugin
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl create -f kube-flannel.yml

[root@k8s-master flannel]# kubectl create -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

[root@k8s-master flannel]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   36m   v1.18.2
[root@k8s-master flannel]# kubectl get pod -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-66bff467f8-4pzqn             1/1     Running   0          35m
coredns-66bff467f8-bw2b4             1/1     Running   0          35m
etcd-k8s-master                      1/1     Running   0          36m
kube-apiserver-k8s-master            1/1     Running   0          36m
kube-controller-manager-k8s-master   1/1     Running   0          36m
kube-flannel-ds-amd64-kf7j7          1/1     Running   0          18m
kube-proxy-g2hlg                     1/1     Running   0          35m
kube-scheduler-k8s-master            1/1     Running   0          36m

# Inspect the pod configuration:
kubectl describe pod kube-flannel-ds-amd64-k92bk -n kube-system

Join the other nodes:

The preparation on the worker nodes is the same. After installing the Kubernetes packages on each node, run the join command that was generated automatically during kubeadm init:

kubeadm join 192.168.253.167:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:289495fa530177884fc6606727728625102df14b8dd042586cb8de61051da0e8

# kubectl get pod -n kube-system -o wide
# kubectl get node
# kubectl get pod -n kube-system
# kubectl get pod -n kube-system -w

Initial errors, caused by the missing flannel image:
    kube-flannel-ds-amd64-k92bk          0/1     Init:0/1                0          4m3s
    kube-flannel-ds-amd64-k92bk          0/1     Init:ImagePullBackOff   0          5m15s
    kube-flannel-ds-amd64-k92bk          0/1     Init:ErrImagePull       0          6m39s
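The bootstrap token printed by kubeadm init expires after 24 hours by default. If a node is added later, a fresh join command can be generated on the master with the standard kubeadm helpers:

```
# Print a complete, fresh "kubeadm join" command (new token plus CA cert hash)
kubeadm token create --print-join-command
# List the existing bootstrap tokens and their expiry
kubeadm token list
```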

Harbor installation and registry configuration: Harbor is an open-source image registry; official site: https://goharbor.io/

Initial setup:
hostnamectl set-hostname --static k8s-habor
yum install -y vim wget net-tools git
yum install lrzsz --nogpgcheck
systemctl stop firewalld && systemctl disable firewalld
swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

Edit the Docker configuration:
vim /etc/docker/daemon.json

master:
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://y7guluho.mirror.aliyuncs.com"],
  "insecure-registries": ["https://hub.51geeks.com"]
}
EOF

nodes:
cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "insecure-registries": ["https://hub.51geeks.com"]
}
EOF

The insecure-registries entry must be added on all three cluster servers.

Prerequisites:

Software         Version   Download URL
OS (CentOS)      8.1
docker           19.03.8
docker-compose   1.25.5    https://github.com/docker/compose/releases/download/1.25.5/docker-compose-$(uname -s)-$(uname -m)
harbor           1.10.2    https://github.com/goharbor/harbor/releases/download/v1.10.2/harbor-offline-installer-v1.10.2.tgz

Install Docker

$ yum remove docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-selinux docker-engine-selinux docker-engine
$ yum install -y yum-utils device-mapper-persistent-data lvm2
$ yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
$ yum-config-manager --enable docker-ce-edge
$ yum install -y docker-ce
$ systemctl start docker
$ systemctl enable docker

Install docker-compose

Installation guide: https://docs.docker.com/compose/install/

https://github.com/docker/compose/
sudo curl -L "https://github.com/docker/compose/releases/download/1.25.5/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
docker-compose --version

Install Harbor

wget -c https://github.com/goharbor/harbor/releases/download/v1.10.2/harbor-offline-installer-v1.10.2.tgz
tar zxvf harbor-offline-installer-v1.10.2.tgz
cd harbor

Configure harbor.yml

$ vim harbor.yml
hostname: hub.51geeks.com
http:
  port: 80
https:
  port: 443
  certificate: /data/cert/server.crt
  private_key: /data/cert/server.key
harbor_admin_password: Harbor12345   # password of the admin user for the web UI
database:
  password: root123
data_volume: /data

Create the certificate:

# Create the private key:
openssl genrsa -des3 -out server.key 2048
# Create the certificate signing request:
openssl req -new -key server.key -out server.csr
# Back up the private key:
cp server.key server.key.org
# Strip the passphrase from the key:
openssl rsa -in server.key.org -out server.key
# Sign the certificate:
openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
chmod a+x *
mkdir /data/cert
chmod -R 777 /data/cert

# Add host entries on all machines (1 master, 2 nodes, 1 harbor):
echo "192.168.253.170 hub.51geeks.com" >> /etc/hosts
192.168.253.167 k8s-master
192.168.253.168 k8s-node01
192.168.253.169 k8s-node02
192.168.253.170 hub.51geeks.com
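Before running the installer it is worth checking that the self-signed certificate looks right and matches the private key; a small verification sketch using the files created above:

```
# Show the subject and validity period of the self-signed certificate
openssl x509 -in server.crt -noout -subject -dates
# The key and the certificate must produce the same modulus hash
openssl rsa -in server.key -noout -modulus | md5sum
openssl x509 -in server.crt -noout -modulus | md5sum
```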

Run the Harbor installer

$ ./install.sh
[root@k8s-habor harbor]# ./install.sh

[Step 0]: checking if docker is installed ...

Note: docker version: 19.03.8

[Step 1]: checking docker-compose is installed ...

Note: docker-compose version: 1.25.5

[Step 2]: loading Harbor images ...
...
Loaded image: goharbor/harbor-db:v1.10.2
Loaded image: goharbor/notary-server-photon:v1.10.2
Loaded image: goharbor/clair-photon:v1.10.2
Loaded image: goharbor/harbor-portal:v1.10.2
Loaded image: goharbor/harbor-core:v1.10.2
Loaded image: goharbor/harbor-jobservice:v1.10.2
Loaded image: goharbor/harbor-registryctl:v1.10.2
Loaded image: goharbor/redis-photon:v1.10.2
Loaded image: goharbor/nginx-photon:v1.10.2
Loaded image: goharbor/chartmuseum-photon:v1.10.2
Loaded image: goharbor/harbor-log:v1.10.2
Loaded image: goharbor/registry-photon:v1.10.2
Loaded image: goharbor/notary-signer-photon:v1.10.2
Loaded image: goharbor/harbor-migrator:v1.10.2
Loaded image: goharbor/prepare:v1.10.2
Loaded image: goharbor/clair-adapter-photon:v1.10.2

[Step 3]: preparing environment ...
[Step 4]: preparing harbor configs ...
prepare base dir is set to /usr/local/src/harbor
Generated configuration file: /config/log/logrotate.conf
Generated configuration file: /config/log/rsyslog_docker.conf
Generated configuration file: /config/nginx/nginx.conf
Generated configuration file: /config/core/env
Generated configuration file: /config/core/app.conf
Generated configuration file: /config/registry/config.yml
Generated configuration file: /config/registryctl/env
Generated configuration file: /config/db/env
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
Generated and saved secret to file: /secret/keys/secretkey
Generated certificate, key file: /secret/core/private_key.pem, cert file: /secret/registry/root.crt
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir

[Step 5]: starting Harbor ...
Creating network "harbor_harbor" with the default driver
Creating harbor-log ... done
Creating harbor-portal ... done
Creating registry      ... done
Creating harbor-db     ... done
Creating registryctl   ... done
Creating redis         ... done
Creating harbor-core   ... done
Creating nginx             ... done
Creating harbor-jobservice ... done
✔ ----Harbor has been installed and started successfully.----

When the service finishes starting, the nginx, db, and other containers are created automatically

$ docker-compose ps
[root@k8s-habor harbor]# docker-compose ps
      Name                     Command                  State                          Ports
---------------------------------------------------------------------------------------------------------------
harbor-core         /harbor/harbor_core              Up (healthy)
harbor-db           /docker-entrypoint.sh            Up (healthy)   5432/tcp
harbor-jobservice   /harbor/harbor_jobservice ...    Up (healthy)
harbor-log          /bin/sh -c /usr/local/bin/ ...   Up (healthy)   127.0.0.1:1514->10514/tcp
harbor-portal       nginx -g daemon off;             Up (healthy)   8080/tcp
nginx               nginx -g daemon off;             Up (healthy)   0.0.0.0:80->8080/tcp, 0.0.0.0:443->8443/tcp
redis               redis-server /etc/redis.conf     Up (healthy)   6379/tcp
registry            /home/harbor/entrypoint.sh       Up (healthy)   5000/tcp
registryctl         /home/harbor/start.sh            Up (healthy)
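Harbor is managed through docker-compose from the directory that contains its docker-compose.yml (the installer's prepare step used /usr/local/src/harbor here), so the usual compose lifecycle commands apply; a short sketch, assuming that directory:

```
cd /usr/local/src/harbor    # the directory holding docker-compose.yml (adjust to your unpack location)
docker-compose stop         # stop all Harbor containers
docker-compose start        # start them again
docker-compose down         # remove the containers (data under /data stays on the host)
docker-compose up -d        # recreate the stack in the background
```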

Login page


Username: admin  Password: Harbor12345

Log in to Harbor from k8s-node01

[root@k8s-node01 ~]# docker login https://hub.51geeks.com
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded

[root@k8s-master ~]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
nginx                                latest              602e111c06b6        9 days ago          127MB
httpd                                latest              b2c2ab6dcf2e        10 days ago         166MB

# Delete an image:
docker rmi -f hub.51geeks.com/library/nginx:latest

# Docker commands for pushing images:
docker tag nginx hub.51geeks.com/library/nginx
docker push hub.51geeks.com/library/nginx
docker tag kubernetesui/dashboard:v2.0.0 hub.51geeks.com/library/kubernetesui/dashboard:v2.0.0
docker push hub.51geeks.com/library/kubernetesui/dashboard:v2.0.0

# Tag an image for a project:
docker tag SOURCE_IMAGE[:TAG] hub.51geeks.com/library/IMAGE[:TAG]

# Push an image to the current project:
docker push hub.51geeks.com/library/IMAGE[:TAG]

Using Kubernetes with the cluster:
kubectl run --help
kubectl apply -f nginx-deployment.yaml
# Check the pods:
kubectl get pods
kubectl get deployment
kubectl get rs
kubectl get nodes    # check the local environment information

Port mapping exposes services to the outside. In Kubernetes a Pod has its own lifecycle; when a Node fails, the ReplicationController or ReplicaSet moves its Pods to other nodes to keep the desired state.
# kubectl expose deployment nginx-deployment --port=80 --type=LoadBalancer
# Check the service status (which port the service is mapped to):
kubectl get services
[root@k8s-master ~]# kubectl get service
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes         ClusterIP      10.96.0.1       <none>        443/TCP        19h
nginx-deployment   LoadBalancer   10.102.49.213   <pending>     80:32682/TCP   89m

After creating the deployment, the containers are running, but by default they can only reach each other inside the cluster. To expose a service externally there are several options:

ClusterIP: the default; the service is reachable through a cluster IP, only from inside the cluster.
NodePort: uses NAT to expose the service on a fixed port of each Node; external clients access it as <NodeIP>:<NodePort>.
LoadBalancer: uses an external load balancer to reach the service.
ExternalName: a feature provided by kube-dns since version 1.7.

[root@k8s-master ~]# kubectl get pod -o wide
NAME                                READY   STATUS    RESTARTS   AGE     IP           NODE         NOMINATED NODE   READINESS GATES
nginx-deployment-7789b77975-fcbmg   1/1     Running   0          2m31s   10.244.2.6   k8s-node02   <none>           <none>
nginx-deployment-7789b77975-m85sx   1/1     Running   0          2m31s   10.244.2.7   k8s-node02   <none>           <none>

[root@k8s-node02 ~]# docker ps -a | grep nginx

# Delete a pod:
kubectl delete pod nginx-deployment-7789b77975-fcbmg
[root@k8s-master ~]# kubectl get svc
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes         ClusterIP      10.96.0.1       <none>        443/TCP        17h
nginx-deployment   LoadBalancer   10.102.49.213   <pending>     80:32682/TCP   10m

ipvsadm -Ln
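As a quick sanity check of the exposed service, the NodePort shown above (32682) should answer on any node IP, and the ClusterIP is reachable from any cluster node; a hedged example using the addresses from this setup:

```
# NodePort 32682 answers on any node IP (values taken from the output above)
curl -I http://192.168.253.168:32682
# The ClusterIP works from any cluster node
curl -I http://10.102.49.213:80
```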

Using Kubernetes:

In Kubernetes 1.18, pods and deployments are created from YAML files

vim tomcat.yaml
apiVersion: v1
kind: Pod
metadata:                           # metadata
  name: tomcat-c                    # the name shown by kubectl get pods and when logging in to the container
  labels:                           # labels, usable as query conditions: kubectl get pods -l
    app: tomcat
    node: devops-103
spec:                               # specification
  containers:                       # containers
  - name: tomcat                    # container name
    image: docker.io/tomcat         # image to use
    ports:
    - containerPort: 8080
    env:                            # environment variables; log in to the container to check them
    - name: GREETING
      value: "hello from the environment"

# Create the pod:
kubectl create -f tomcat.yaml
kubectl get pods
kubectl get nodes
kubectl scale deployments/tomcat --replicas=3
kubectl get deployments
kubectl get pods
kubectl describe pod tomcat-858b8c476d-cfrtt
kubectl scale deployments/tomcat --replicas=2
kubectl describe deployment
kubectl get pods -l app=tomcat
kubectl get services -l app=tomcat
kubectl label --overwrite pod tomcat-858b8c476d-vnm98 node=devops-102
# --overwrite is used here because the label was set incorrectly before
kubectl describe pods tomcat-858b8c476d-vnm98
[root@k8s-master ~]# kubectl describe pods nginx
Name:         nginx-deployment-7789b77975-m85sx
Namespace:    default
Priority:     0
Node:         k8s-node02/192.168.253.169
Start Time:   Sun, 03 May 2020 14:22:25 +0800
Labels:       app=nginx            # when creating the deployment, kubectl adds a label for us automatically, here app=nginx
              pod-template-hash=7789b77975
Annotations:  <none>
Status:       Running
IP:           10.244.2.7
IPs:
  IP:           10.244.2.7
Controlled By:  ReplicaSet/nginx-deployment-7789b77975
Containers:
  nginx:
    Container ID:   docker://be642684912e5662a8bdd3b10e5e1be28936c045ea1df07421dac106b07cbef1
    Image:          hub.51geeks.com/library/nginx:latest
    Image ID:       docker-pullable://hub.51geeks.com/library/nginx@sha256:cccef6d6bdea671c394956e24b0d0c44cd82dbe83f543a47fdc790fadea48422
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sun, 03 May 2020 14:22:40 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-q9jpf (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-q9jpf:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-q9jpf
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>

Name:         nginx-deployment-7789b77975-n5zpc
Namespace:    default
Priority:     0
Node:         k8s-node01/192.168.253.168
Start Time:   Sun, 03 May 2020 14:33:19 +0800
Labels:       app=nginx
              pod-template-hash=7789b77975
Annotations:  <none>
Status:       Running
IP:           10.244.1.2
IPs:
  IP:           10.244.1.2
Controlled By:  ReplicaSet/nginx-deployment-7789b77975
Containers:
  nginx:
    Container ID:   docker://ec776f43a83d474be5ebfb994532731509727344de57e814464f69b78dbeb30a
    Image:          hub.51geeks.com/library/nginx:latest
    Image ID:       docker-pullable://hub.51geeks.com/library/nginx@sha256:cccef6d6bdea671c394956e24b0d0c44cd82dbe83f543a47fdc790fadea48422
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Sun, 03 May 2020 14:34:36 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-q9jpf (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-q9jpf:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-q9jpf
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>

Install and deploy the dashboard

1. Check that the pods are running

kubectl get pods -A -o wide

# Download the recommended.yaml file
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
# Modify the recommended.yaml file
vim recommended.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort   # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000   # added
  selector:
    k8s-app: kubernetes-dashboard

# The automatically generated certificate cannot be used by many browsers, so we create our own.
# Comment out the kubernetes-dashboard-certs object declaration and adjust the image address if needed:
#apiVersion: v1
#kind: Secret
#metadata:
#  labels:
#    k8s-app: kubernetes-dashboard
#  name: kubernetes-dashboard-certs
#  namespace: kubernetes-dashboard
#type: Opaque
---

kubectl apply -f recommended.yaml

Create the certificate

mkdir dashboard-certs
cd dashboard-certs/

# Create the namespace
kubectl create namespace kubernetes-dashboard

# Create the key file
openssl genrsa -out dashboard.key 2048

# Create the certificate request
openssl req -new -out dashboard.csr -key dashboard.key -subj '/CN=dashboard-cert'

# Self-sign the certificate
openssl x509 -req -days 36000 -in dashboard.csr -signkey dashboard.key -out dashboard.crt

# Create the kubernetes-dashboard-certs object
kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard
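A quick check, before applying recommended.yaml, that the namespace and the certificate secret exist:

```
# The secret should list dashboard.crt and dashboard.key in its Data section
kubectl get secret kubernetes-dashboard-certs -n kubernetes-dashboard
kubectl describe secret kubernetes-dashboard-certs -n kubernetes-dashboard
```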

Delete the namespace:

[root@k8s-master dashboard-certs]# kubectl get ns | grep kubernetes
kubernetes-dashboard   Active   10h
[root@k8s-master dashboard-certs]# kubectl delete ns kubernetes-dashboard
namespace "kubernetes-dashboard" deleted
[root@k8s-master dashboard-certs]# kubectl get ns | grep kubernetes
[root@k8s-master dashboard-certs]# kubectl get pods -A -o wide

5. Install the dashboard

kubectl create -f ~/recommended.yaml
[root@k8s-master dashboard-certs]# kubectl create -f ~/recommended.yaml
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
Error from server (AlreadyExists): error when creating "/root/recommended.yaml": namespaces "kubernetes-dashboard" already exists
Error from server (AlreadyExists): error when creating "/root/recommended.yaml": secrets "kubernetes-dashboard-certs" already exists

Note: you may see errors like the following.
Error from server (AlreadyExists): error when creating "./recommended.yaml": namespaces "kubernetes-dashboard" already exists

This happens because the kubernetes-dashboard namespace was already created while creating the certificate, so the error can simply be ignored.

6. Check the installation result

[root@k8s-master dashboard-certs]# kubectl get pods -A -o wide
NAMESPACE              NAME                                         READY   STATUS              RESTARTS   AGE    IP                NODE         NOMINATED NODE   READINESS GATES
default                nginx-deployment-7789b77975-m85sx            1/1     Running             0          166m   10.244.2.7        k8s-node02   <none>           <none>
default                nginx-deployment-7789b77975-n5zpc            1/1     Running             0          155m   10.244.1.2        k8s-node01   <none>           <none>
kube-system            coredns-66bff467f8-4pzqn                     1/1     Running             2          20h    10.244.0.7        k8s-master   <none>           <none>
kube-system            coredns-66bff467f8-bw2b4                     1/1     Running             2          20h    10.244.0.6        k8s-master   <none>           <none>
kube-system            etcd-k8s-master                              1/1     Running             3          20h    192.168.253.167   k8s-master   <none>           <none>
kube-system            kube-apiserver-k8s-master                    1/1     Running             2          20h    192.168.253.167   k8s-master   <none>           <none>
kube-system            kube-controller-manager-k8s-master           1/1     Running             2          20h    192.168.253.167   k8s-master   <none>           <none>
kube-system            kube-flannel-ds-amd64-k92bk                  1/1     Running             9          19h    192.168.253.169   k8s-node02   <none>           <none>
kube-system            kube-flannel-ds-amd64-kf7j7                  1/1     Running             2          20h    192.168.253.167   k8s-master   <none>           <none>
kube-system            kube-flannel-ds-amd64-kmg7d                  1/1     Running             1          19h    192.168.253.168   k8s-node01   <none>           <none>
kube-system            kube-proxy-4hjbl                             1/1     Running             1          19h    192.168.253.168   k8s-node01   <none>           <none>
kube-system            kube-proxy-g2hlg                             1/1     Running             2          20h    192.168.253.167   k8s-master   <none>           <none>
kube-system            kube-proxy-kfvgx                             1/1     Running             1          19h    192.168.253.169   k8s-node02   <none>           <none>
kube-system            kube-scheduler-k8s-master                    1/1     Running             2          20h    192.168.253.167   k8s-master   <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-6b4884c9d5-mj8qr   0/1     ContainerCreating   0          84s    <none>            k8s-node02   <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-7b544877d5-72spd        0/1     ContainerCreating   0          87s    <none>            k8s-node02   <none>           <none>
[root@k8s-master dashboard-certs]# kubectl get service -n kubernetes-dashboard -o wide
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE    SELECTOR
dashboard-metrics-scraper   NodePort    10.111.221.30    <none>        8000:30000/TCP   2m1s   k8s-app=dashboard-metrics-scraper
kubernetes-dashboard        ClusterIP   10.105.134.250   <none>        443/TCP          2m7s   k8s-app=kubernetes-dashboard

7. Create the dashboard administrator

Create the dashboard-admin.yaml file.

vim dashboard-admin.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: dashboard-admin
  namespace: kubernetes-dashboard

After saving and exiting, create the administrator with:

kubectl create -f ./dashboard-admin.yaml

8. Grant permissions to the user

Create the dashboard-admin-bind-cluster-role.yaml file.

vim dashboard-admin-bind-cluster-role.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin-bind-cluster-role
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard

kubectl create -f ./dashboard-admin-bind-cluster-role.yaml

9. View and copy the user token

Run the following on the command line:

kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')

The full session looks like this:
[root@k8s-master dashboard-certs]# kubectl create -f ./dashboard-admin-bind-cluster-role.yaml
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin-bind-cluster-role created

[root@k8s-master dashboard-certs]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')
Name:         dashboard-admin-token-dpbz5
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 00d333aa-e3fd-455d-b81d-20d7c9de136e

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6Inh3RGxrbUZCakp3RHBvSkg0QkVaVnRZNEgxalFSdlZsQXZXUEh6bHlUR3cifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tZHBiejUiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMDBkMzMzYWEtZTNmZC00NTVkLWI4MWQtMjBkN2M5ZGUxMzZlIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.aJYmXtzuLkr1SvcoxCfisPBDFbWium6qVHL6qsLkqdRS7WYr54cQm20H6Vr1wLr-1QE3g0fENYpLv7SVVEmvXxy5MemWSmJ-gzDGhWnwvMkCnuX3kFOuJL7VLzwwNf31MuO458y8os34BKnfYc3N8Sk4SuyzRhYy3rCJ9lIGa46-bGsSMTtbzWxp1uwOvaec3cG2gmoJjzh7nInNqpNg-G3sD6q8kfiWVUeS1utfjvpw_yECSxL9yz86XgZxopdn7iCODeTYZGzQfy1qKEaHUgwuO0jLgreTPNdsq1BPh6ld2W0b2KloFOqqMJAc5BmX5npCj3fNDOS_QgoWzy4WDA
ca.crt:     1025 bytes
namespace:  20 bytes

[root@k8s-master ~]# kubectl get services --all-namespaces
NAMESPACE              NAME                        TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default                kubernetes                  ClusterIP      10.96.0.1        <none>        443/TCP                  21h
default                nginx-deployment            LoadBalancer   10.102.49.213    <pending>     80:32682/TCP             3h55m
kube-system            kube-dns                    ClusterIP      10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   21h
kubernetes-dashboard   dashboard-metrics-scraper   NodePort       10.111.221.30    <none>        8000:30000/TCP           73m
kubernetes-dashboard   kubernetes-dashboard        ClusterIP      10.105.134.250   <none>        443/TCP                  74m
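Instead of grepping for the secret name, the token can also be pulled out in one line with a jsonpath query; this sketch relies on the secret that Kubernetes 1.18 creates automatically for the dashboard-admin ServiceAccount:

```
# Print only the dashboard-admin token, base64-decoded
kubectl -n kubernetes-dashboard get secret \
  $(kubectl -n kubernetes-dashboard get sa dashboard-admin -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 --decode; echo
```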

View the dashboard UI: open https://192.168.253.167:30000 in a browser, as shown below.

Here we choose the Token login method and enter the token obtained on the command line, as shown below.

After clicking Sign in, the dashboard opens, as shown below.


At this point, dashboard 2.0.0 has been installed successfully.

https://www.processon.com/view/link/5ac64532e4b00dc8a02f05eb?spm=a2c4e.10696291.0.0.6ec019a4bYSFIw#map
Delete pods whose image cannot be found:
[root@k8s-master ~]# kubectl get pod -n kubernetes-dashboard
NAME                                         READY   STATUS             RESTARTS   AGE
dashboard-metrics-scraper-755f66f567-vbvzc   1/1     Running            0          119m
kubernetes-dashboard-77f89d4675-48zp7        1/1     Running            0          92m
kubernetes-dashboard-77f89d4675-6r87x        0/1     CrashLoopBackOff   23         109m
kubernetes-dashboard-77f89d4675-xv6bq        0/1     CrashLoopBackOff   20         119m
[root@k8s-master ~]# kubeclt delete pod kubernetes-dashboard-77f89d4675-6r87x
-bash: kubeclt: command not found
[root@k8s-master ~]# kubectl delete pod kubernetes-dashboard-77f89d4675-6r87x
Error from server (NotFound): pods "kubernetes-dashboard-77f89d4675-6r87x" not found
[root@k8s-master ~]# kubectl delete pod kubernetes-dashboard-77f89d4675-6r87x -n kubernetes-dashboard
pod "kubernetes-dashboard-77f89d4675-6r87x" deleted
[root@k8s-master ~]# kubectl delete pod kubernetes-dashboard-77f89d4675-xv6bq -n kubernetes-dashboard
pod "kubernetes-dashboard-77f89d4675-xv6bq" deleted
[root@k8s-master ~]# kubectl get pod -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-755f66f567-vbvzc   1/1     Running   0          124m
kubernetes-dashboard-77f89d4675-48zp7        1/1     Running   0          98m
kubernetes-dashboard-77f89d4675-s55ct        1/1     Running   1          70s
kubernetes-dashboard-77f89d4675-wbbr9        1/1     Running   0          32s

How to handle a pod stuck in CrashLoopBackOff
The cluster had been running normally and the nodes were healthy, but a started pod ended up in the abnormal waiting state CrashLoopBackOff.
1. Log in to the node and check the pod status with kubectl:
kubectl get pod
The abnormal pod is named elkhost-944bcbcd4-8n9nj.
2. Look at the details of the pod in this state:
kubectl describe pod elkhost-944bcbcd4-8n9nj
3. Look at the pod's logs:
kubectl logs elkhost-944bcbcd4-8n9nj

kubectl get - list available resources
# list all pods:
kubectl get pods --all-namespaces
# list all jobs:
kubectl get job --all-namespaces

kubectl describe - show detailed information about a resource
kubectl describe pod nvjob-lnrxj -n default
# "-n default" is the flag that selects the pod in the default namespace

kubectl logs - print the logs of a container in a pod
# This one is special: no TYPE is given, because kubectl logs defaults to pods,
# so "kubectl logs pod" fails with: Error from server (NotFound): pods "pod" not found
kubectl logs calijob -n calib

kubectl exec - run a command in a container of a pod
# like logs, no type is needed; pod is the default:
kubectl exec <pod_name> -n <namespace> date

kubectl delete - delete resources
kubectl delete pod cali-2 -n calib
# Batch-delete all pods in the calib namespace whose status is Error:
kubectl get pods -n calib | grep Error | awk '{print $1}' | xargs kubectl delete pod -n calib
# (note that the status words "Error" and "Completed" start with a capital letter)

# Check the kubernetes-dashboard deployment logs
[root@k8s-master ~]# kubectl logs -f kubernetes-dashboard-77f89d4675-s55ct --namespace=kubernetes-dashboard
2020/05/03 14:38:11 Starting overwatch
2020/05/03 14:38:11 Using namespace: kubernetes-dashboard
2020/05/03 14:38:11 Using in-cluster config to connect to apiserver
2020/05/03 14:38:11 Using secret token for csrf signing
2020/05/03 14:38:11 Initializing csrf token from kubernetes-dashboard-csrf secret
panic: Get https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf: dial tcp 10.96.0.1:443: i/o timeout

goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc0004d4420)
    /home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:41 +0x446
github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
    /home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:66
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc000484080)
    /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:501 +0xc6
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc000484080)
    /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:469 +0x47
github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
    /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:550
main.main()
    /home/travis/build/kubernetes/dashboard/src/app/backend/dashboard.go:105 +0x20d

# Check the kubernetes-dashboard deployment status
[root@k8s-master ~]# kubectl get pod --all-namespaces
NAMESPACE              NAME                                         READY   STATUS             RESTARTS   AGE
default                nginx-deployment-7789b77975-m85sx            1/1     Running            0          16h
default                nginx-deployment-7789b77975-n5zpc            1/1     Running            0          16h
kube-system            coredns-66bff467f8-4pzqn                     1/1     Running            3          34h
kube-system            coredns-66bff467f8-bw2b4                     1/1     Running            3          34h
kube-system            etcd-k8s-master                              1/1     Running            4          34h
kube-system            kube-apiserver-k8s-master                    1/1     Running            3          34h
kube-system            kube-controller-manager-k8s-master           1/1     Running            7          34h
kube-system            kube-flannel-ds-amd64-k92bk                  1/1     Running            9          33h
kube-system            kube-flannel-ds-amd64-kf7j7                  1/1     Running            3          33h
kube-system            kube-flannel-ds-amd64-kmg7d                  1/1     Running            1          33h
kube-system            kube-proxy-4hjbl                             1/1     Running            1          33h
kube-system            kube-proxy-g2hlg                             1/1     Running            3          34h
kube-system            kube-proxy-kfvgx                             1/1     Running            1          33h
kube-system            kube-scheduler-k8s-master                    1/1     Running            7          34h
kubernetes-dashboard   dashboard-metrics-scraper-755f66f567-vbvzc   1/1     Running            0          153m
kubernetes-dashboard   kubernetes-dashboard-77f89d4675-48zp7        1/1     Running            0          126m
kubernetes-dashboard   kubernetes-dashboard-77f89d4675-s55ct        0/1     CrashLoopBackOff   9          29m
kubernetes-dashboard   kubernetes-dashboard-77f89d4675-wbbr9        0/1     CrashLoopBackOff   9          29m

# Delete the kube-dashboard deployment
kubectl delete -f kube-dashboard.yaml
# Deploy kubernetes-dashboard again
kubectl create -f kube-dashboard.yaml

### Check the Kubernetes status
```
kubectl get pods -A    # check the overall status
kubectl get cs         # check the ready status of the control-plane components
kubectl get node       # check node status
kubectl -n kube-system get service kubernetes-dashboard    # get port information
```

### Day-to-day operations
```
kubectl get pods --all-namespaces             # get status
journalctl -f -u kubelet.service              # view kubernetes logs
kubectl apply -f                              # deploy a component
kubectl delete                                # delete a component
kubectl get node                              # get the status of all nodes
kubectl describe pod kubernetes-dashboard-7d75c474bb-zvc85 -n kube-system   # see why a pod is in its current state
kubectl logs -f kubernetes-dashboard-7d75c474bb-zvc85 -n kube-system        # view the container logs
```

### Initialization
```
kubeadm init --kubernetes-version=v1.15.0 --pod-network-cidr=172.16.0.0/16 --apiserver-advertise-address=192.168.1.239
# add a k8s node
kubeadm join 192.168.1.239:6443 --token mcpg7g.hosgnl6ljwconxxe \
    --discovery-token-ca-cert-hash sha256:864cf0a1b8ee307a557f780ef30856278898dbe36259575699134d5389d9e935

# install
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# get pod status
kubectl get pods coredns-5c98db65d4-6hbhd -n kube-system -o yaml
kubectl get pods coredns-5c98db65d4-6hbhd --namespace=kube-system -o yaml | grep resources

### initialize, get pod status, view logs, delete k8s resources from yaml
sudo kubeadm init --kubernetes-version=v1.15.0 --apiserver-advertise-address=192.168.1.239 --pod-network-cidr=192.168.0.0/16
kubectl get pods --all-namespaces
kubectl get node
journalctl -f -u kubelet.service    # logs
kubectl delete -f rbac-kdd.yaml

kubectl delete -f calico.yaml
kubectl delete pod calico-node-zplqs -n kube-system
kubectl describe pod kubernetes-dashboard-7d75c474bb-zvc85 -n kube-system
kubectl logs -f kube-apiserver-server -n kube-system

kubectl delete -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
kubectl delete -f calico.yaml
kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
```

Install Helm 3.2.0

1. Introduction to Helm

Think of Helm as the Kubernetes counterpart of Maven/NPM, Python's pip, or yum on Linux. Helm is a package manager for Kubernetes developed by Deis (https://deis.com/). Each package is called a Chart, and a Chart is a directory (usually packed and compressed into a single name-version.tgz file for easier transfer and storage). For application publishers, Helm can package applications, manage application dependencies, manage application versions, and publish applications to a repository. For users, Helm removes the need to learn Kubernetes YAML syntax and write deployment files: the applications they need can be downloaded and installed on Kubernetes through Helm. In addition, Helm provides powerful deploy, delete, upgrade, and rollback capabilities for software running on Kubernetes.

Kubernetes Helm is a tool for managing pre-configured packages of Kubernetes resources; in Helm these packages are called Kubernetes charts.

With Helm you can:

  • find and use popular software packaged as Kubernetes charts
  • share your own applications as Kubernetes charts
  • create reproducible builds of your Kubernetes applications
  • manage your Kubernetes manifest files more intelligently
  • manage releases of Helm packages

  • Helm v2 consists of two components: the helm client and the Tiller server. helm is a command-line tool for local development, chart management, chart repository management, and so on; Tiller receives requests from helm and talks to the Kubernetes apiserver.
  • Helm v3 removes Tiller; helm talks to Kubernetes directly, and the ServiceAccount is authenticated through the kubeconfig configuration.
    • Design principle: it runs as a thread (controller loop)
      1. helm-controller runs on the master node and list/watches HelmChart CRD objects
      2. When a CRD changes, a Job is executed to apply the update
      3. The Job container uses rancher/klipper-helm as its entrypoint
      4. klipper-helm contains the helm CLI and can install/upgrade/delete the corresponding chart

Environment prerequisites: Kubernetes is already installed, and you are comfortable with kubectl and YAML configuration files.

2. Install Helm 3

# Download the Helm binary from Git (Git address)

Set the KUBECONFIG environment variable to point to the config file holding the API server address and token; the default is ~/.kube/config.

export KUBECONFIG=/root/.kube/config

# wget https://get.helm.sh/helm-v3.2.0-linux-amd64.tar.gz

Extract the archive and move the helm executable to the /usr/local/bin/ directory

Method 1:
tar -zxvf helm-v3.2.0-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/

Method 2:
# curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
# chmod 700 get_helm.sh
# ./get_helm.sh

# Check helm
[root@k8s-master bin]# helm --help
[root@k8s-master tomcat]# helm version
version.BuildInfo{Version:"v3.2.0", GitCommit:"e11b7ce3b12db2941e90399e874513fbd24bcb71", GitTreeState:"clean", GoVersion:"go1.13.10"}
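Optionally, Helm 3 can generate shell completion, which makes the commands in the rest of this section easier to type; a small sketch for bash:

```
# Enable bash completion for helm (new shells pick it up automatically)
helm completion bash > /etc/bash_completion.d/helm
source /etc/bash_completion.d/helm
```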

Helm 3 configuration

# Add a Helm 3 repository
helm repo add repo_name1 https://aliacs-app-catalog.oss-cn-hangzhou.aliyuncs.com/charts-incubator/
# Add another Helm 3 repository (optional; just to show that multiple repositories can be added)
helm repo add stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
# Update
helm repo update
helm repo list

[root@k8s-master tomcat]# helm repo list
NAME        URL
aliyuncs    https://apphub.aliyuncs.com
repo_name1  https://aliacs-app-catalog.oss-cn-hangzhou.aliyuncs.com/charts-incubator/
stable      https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
[root@k8s-master tomcat]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "repo_name1" chart repository
...Successfully got an update from the "stable" chart repository
...Successfully got an update from the "aliyuncs" chart repository
Update Complete. ⎈ Happy Helming!⎈
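With the repositories configured, charts can be searched locally before installing anything; for example:

```
# Search the configured repositories for charts
helm search repo nginx
helm search repo tomcat
```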

Using Helm 3

# Generate the chart files; this creates an nginx directory:
helm create nginx
tree nginx/
nginx/
├── charts                  # charts this chart depends on
├── Chart.yaml              # description of this chart, including icon URL, version information, etc.
├── templates               # directory of Kubernetes template files
│   ├── deployment.yaml     # YAML template for creating Kubernetes resources
│   ├── _helpers.tpl        # files starting with an underscore can be referenced by other templates
│   ├── hpa.yaml            # CPU/memory resource configuration for the service
│   ├── ingress.yaml        # ingress configuration for domain-based access together with the service
│   ├── NOTES.txt           # notes shown to the user after helm install
│   ├── service.yaml        # Kubernetes Service YAML template
└── values.yaml             # variables used by the template files


# Change the service type in values.yaml to NodePort
# Install the chart (note the trailing dot)
helm install -f values.yaml nginx1 .
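Before installing, the rendered manifests can be previewed without touching the cluster; a dry-run sketch using the same chart directory and values file:

```
# Render the templates locally and show what would be applied
helm template nginx1 . -f values.yaml
# Or let the API server validate the release without creating anything
helm install nginx1 . -f values.yaml --dry-run --debug
```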

# List releases
helm ls
helm list
# Delete a release
helm delete nginx1

#helm install -f values.yaml nginx1 .

[root@k8s-master tomcat]# helm list
NAME     NAMESPACE   REVISION   UPDATED                                    STATUS     CHART          APP VERSION
tomcat1  default     1          2020-05-04 14:29:04.274694143 +0800 CST   deployed   tomcat-0.1.0   1.16.0

Verify the service

# Check with kubectl that the service started successfully
[root@k8s-master tomcat]# kubectl get all
NAME                                    READY   STATUS    RESTARTS   AGE
pod/nginx-deployment-7789b77975-m85sx   1/1     Running   0          24h
pod/nginx-deployment-7789b77975-n5zpc   1/1     Running   0          24h
pod/tomcat1-6b6765ffc-pb9ld             1/1     Running   0          11m
pod/tomcat1-6b6765ffc-wcc69             1/1     Running   0          11m

NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes         ClusterIP      10.96.0.1       <none>        443/TCP        42h
service/nginx-deployment   LoadBalancer   10.102.49.213   <pending>     80:32682/TCP   24h
service/tomcat1            NodePort       10.100.13.128   <none>        80:31959/TCP   11m

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deployment   2/2     2            2           24h
deployment.apps/tomcat1            2/2     2            2           11m

NAME                                          DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-deployment-7789b77975   2         2         2       24h
replicaset.apps/tomcat1-6b6765ffc             2         2         2       11m


