Chapter 1: k8s System Architecture
From a system-architecture point of view, k8s has two kinds of nodes:
Master — the control-plane node, the commander
Node — the worker node, which does the actual work
1. Master node components
API Server: exposes the k8s API
Handles REST operations and updates the corresponding objects in Etcd
The single entry point for creating, reading, updating and deleting every resource (see the example after this list)
Scheduler: the resource scheduler
Decides which Node a Pod is bound to, based on the node resource state recorded in etcd
Controller Manager
Keeps Pods healthy and present
The automation control center for resource objects; a Kubernetes cluster runs many controllers
Etcd
The database of the Kubernetes cluster
All persisted state is stored in Etcd
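As a quick illustration (my own example, not from the original notes): every kubectl verb is just a REST call to the API Server, which you can surface with kubectl's verbosity flag or hit directly:
kubectl get pods -v=6                                 # logs the underlying GET request URL sent to the apiserver
kubectl get --raw /api/v1/namespaces/default/pods     # fetch the same resource straight from the REST API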
2. Node components
Docker Engine
Manages the containers on the node; what ultimately gets created is a Docker container.
kubelet
The agent installed on each Node; manages Pods as well as containers, images and Volumes, giving the cluster control over the node.
kube-proxy
The network proxy installed on each Node; provides proxying and load balancing and implements communication with Services.
Chapter 2: k8s Logical Architecture
Logically, k8s is split into
Pod
Controller
Service
1. POD
The POD is the smallest unit in k8s
A POD's IP address is assigned at random; deleting a POD changes its IP
Every POD has a root (pause) container
A POD consists of one or more containers
All containers in a POD share the root container's network namespace
The POD's network address is provided by the root container (see the sketch below)
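A minimal sketch (my own illustration; names are made up) of two containers in one POD sharing the root container's network namespace — the busybox container reaches nginx over localhost:
apiVersion: v1
kind: Pod
metadata:
  name: shared-net-demo
spec:
  containers:
  - name: nginx
    image: nginx:alpine
  - name: probe
    image: busybox
    # same network namespace, so 127.0.0.1 hits the nginx container
    command: ["/bin/sh","-c","while true; do wget -q -O /dev/null http://127.0.0.1 && echo reached nginx on localhost; sleep 5; done"]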
2. Controller
Controllers manage PODs; there are many kinds
- RC  Replication Controller — keeps a given number of POD replicas running
- RS  ReplicaSet — the upgraded version of RC
- Deployment — recommended; more powerful, manages PODs through an RS
- DaemonSet — guarantees that every Node runs exactly one copy of the Pod
- StatefulSet — for stateful applications; gives each Pod a stable identity and guarantees the order of deployment and scaling
3. Service
NodeIP — exposed for external user access
ClusterIP — cluster-internal IP that dynamically tracks the POD IPs behind it
POD IP — the IP of the POD itself
Chapter 3: Preparing the k8s Lab Environment
1. Host configuration
Hostname  IP address  Recommended  Minimum
node1     10.0.0.11   1C4G40G      1C2G
node2     10.0.0.12   1C2G40G      1C1G
node3     10.0.0.13   1C2G40G      1C1G
2. Initialization steps (a shell sketch follows this list)
Start from a clean environment
Set the hostnames
Add host entries for name resolution
Disable the firewall
Disable SELinux
Configure time synchronization
Configure the Aliyun yum mirror
Make sure the network is reachable
Disable the SWAP partition
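A hedged shell sketch of this checklist on CentOS 7 (run on every machine; set the matching hostname on each):
hostnamectl set-hostname node1        # node2/node3 on the other machines
cat >>/etc/hosts <<EOF
10.0.0.11 node1
10.0.0.12 node2
10.0.0.13 node3
EOF
systemctl stop firewalld && systemctl disable firewalld                # firewall off now and on boot
setenforce 0                                                           # SELinux off for this boot...
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config    # ...and permanently
yum install -y chrony && systemctl enable chronyd && systemctl start chronyd   # time sync
swapoff -a                            # swap off now...
sed -i '/swap/s/^/#/' /etc/fstab      # ...and after reboot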
Chapter 4: Installing a Specific Docker Version
1. Configure the Aliyun repo
cd /etc/yum.repos.d/
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
2. Install the specified docker version
yum -y install docker-ce-18.09.9-3.el7 docker-ce-cli-18.09.9-3.el7
3. Configure the docker registry mirror
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors": ["https://ig2l319y.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
4. Enable and start docker
systemctl enable docker && systemctl start docker
5. Check the version
docker -v
Chapter 5: Deploying kubeadm and kubelet
NOTE: every step in this chapter must be run on ALL machines!!!
1. Set up the domestic (Aliyun) k8s yum repo
cat >/etc/yum.repos.d/kubernetes.repo<<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
2. Install kubeadm
yum install -y kubelet-1.16.2 kubeadm-1.16.2 kubectl-1.16.2 ipvsadm
3. Allow kubelet to run with swap enabled (it is told not to fail when swap is on)
cat > /etc/sysconfig/kubelet<<EOF
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
EOF
4. Set kernel parameters
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
5. Enable kubelet at boot
systemctl enable kubelet && systemctl start kubelet
6. Load the IPVS modules
cat >/etc/sysconfig/modules/ipvs.modules<<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod +x /etc/sysconfig/modules/ipvs.modules
source /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack_ipv
Chapter 6: Initializing the Cluster and Deploying the Master
0. Deployment plan
Node roles
node1  master node  API Server, controller-manager, scheduler, kube-proxy, kubelet, etcd
node2  worker node  Docker, kubelet, kube-proxy
node3  worker node  Docker, kubelet, kube-proxy
IP plan
POD IP      10.2.0.0/16
Cluster IP  10.1.0.0/16
Node IP     10.0.0.0/24
1. Initialization command
NOTE: run this on node1 ONLY!!!
Reference documentation:
https://v1-16.docs.kubernetes.io/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/
The init command:
kubeadm init \
--apiserver-advertise-address=10.0.0.11 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.16.2 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.2.0.0/16 \
--service-dns-domain=cluster.local \
--ignore-preflight-errors=Swap \
--ignore-preflight-errors=NumCPU
When it finishes, the output includes the command worker nodes use to join the cluster:
kubeadm join 10.0.0.11:6443 --token 2an0sn.kykpta54fw6uftgq \
--discovery-token-ca-cert-hash sha256:e7d36e1fb53e59b12f0193f4733edb465d924321bcfc055f801cf1ea59d90aae
2. Prepare the kubeconfig for kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
3. Get node information
[root@node1 ~]# kubectl get nodes
NAME    STATUS     ROLES    AGE   VERSION
node1   NotReady   master   15m   v1.16.2
4. Enable command completion
yum install bash-completion -y
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
kubectl completion bash >/etc/bash_completion.d/kubectl
5. Switch kube-proxy to ipvs mode
Run the command below, change mode: "" to mode: "ipvs", then save and quit
kubectl edit cm kube-proxy -n kube-system
Restart kube-proxy:
kubectl get pod -n kube-system |grep kube-proxy |awk '{system("kubectl delete pod "$1" -n kube-system")}'
Check the pods:
kubectl get -n kube-system pod|grep "kube-proxy"
Check the log; seeing IPVS and rr means it worked:
[root@node1 ~]# kubectl -n kube-system logs -f kube-proxy-7cdbn
I0225 08:03:57.736191 1 node.go:135] Successfully retrieved node IP: 10.0.0.11
I0225 08:03:57.736249 1 server_others.go:176] Using ipvs Proxier.
W0225 08:03:57.736841 1 proxier.go:420] IPVS scheduler not specified, use rr by default
Check the IPVS rules:
[root@node1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.0.1:443 rr
-> 10.0.0.11:6443 Masq 1 0 0
TCP 10.1.0.10:53 rr
TCP 10.1.0.10:9153 rr
UDP 10.1.0.10:53 rr
Chapter 7: Deploying the Network Plugin
NOTE: install and deploy on node1 ONLY!!!
1. Deploy the Flannel network plugin
git clone --depth 1 https://github.com/coreos/flannel.git
2. Edit the manifest
cd flannel/Documentation/
vim kube-flannel.yml
egrep -n "10.2.0.0|mirror|eth0" kube-flannel.yml
128: "Network": "10.2.0.0/16",
172: image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
186: image: quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
192: - --iface=eth0
3. Apply the manifest
kubectl create -f kube-flannel.yml
4. Check the pod status; after a short wait everything should be Running
[root@node1 ~]# kubectl -n kube-system get pod
NAME READY STATUS RESTARTS AGE
coredns-58cc8c89f4-bzlkw 1/1 Running 0 77m
coredns-58cc8c89f4-sgs44 1/1 Running 0 77m
etcd-node1 1/1 Running 0 76m
kube-apiserver-node1 1/1 Running 0 76m
kube-controller-manager-node1 1/1 Running 0 76m
kube-flannel-ds-amd64-cc5g6 1/1 Running 0 3m10s
kube-proxy-7cdbn 1/1 Running 0 23m
kube-scheduler-node1 1/1 Running 0 76m
Chapter 8: Deploying the Worker Nodes
1. On the master, print the join command
kubeadm token create --print-join-command
2. Run the join command on each worker node
kubeadm join 10.0.0.11:6443 --token uqf018.mia8v3i1zcai19sj --discovery-token-ca-cert-hash sha256:e7d36e1fb53e59b12f0193f4733edb465d924321bcfc055f801cf1ea59d90aae
3. Check the status from node1
kubectl get nodes
4. Label the nodes
[root@node1 ~]# kubectl label nodes node2 node-role.kubernetes.io/node=
[root@node1 ~]# kubectl label nodes node3 node-role.kubernetes.io/node=
5. Check the node status again
[root@node1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
node1 Ready master 171m v1.16.2
node2 Ready node 27m v1.16.2
node3 Ready node 27m v1.16.2
Chapter 9: Common Resource Types
1. Workloads
RC  ReplicationController
RS  ReplicaSet
DP  Deployment
DS  DaemonSet
2. Service discovery and load balancing
Service
Ingress
3. Configuration and storage
ConfigMap — stores configuration files
Secret — stores sensitive data such as passwords and tokens
4. Cluster-level resources
Namespace
Node
Role
ClusterRole
RoleBinding
ClusterRoleBinding
Chapter 10: Resource Manifests
1. Ways to create resources
The apiserver accepts resource definitions only in JSON
A manifest supplied in YAML is converted to JSON automatically before being submitted to the apiserver
2. Anatomy of a manifest
Inspect the fields a manifest needs:
kubectl explain pod
kubectl explain pod.spec
kubectl explain pod.spec.volumes
Manifest fields:
apiVersion: v1 #which k8s API version/group the resource belongs to
kind: Pod #resource type
metadata: #metadata, a nested field
spec: #the specification: what properties the created object should have
status: #read-only, maintained by the system; shows the current state
3. Creating a POD from a manifest
3.1 First create a pod from the command line
kubectl create deployment nginx --image=nginx:alpine
kubectl get pod -o wide
3.2 Export the pod just created as YAML
kubectl get pod -o yaml > nginx-pod.yaml
3.3 Trim the manifest, deleting the settings we don't need
cat nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: nginx
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:alpine
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 80
The same manifest written as JSON:
{
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "nginx",
        "labels": {
            "app": "nginx"
        }
    },
    "spec": {
        "containers": [
            {
                "name": "nginx",
                "image": "nginx:alpine",
                "imagePullPolicy": "IfNotPresent"
            }
        ]
    }
}
3.4 Delete the resources created from the command line
kubectl delete deployments.apps nginx
3.5 Apply the manifest
kubectl create -f nginx-pod.yaml
3.6 Check the pod
kubectl get pod -o wide
3.7 Show detailed pod information
kubectl describe pod nginx
4. POD manifest summary
Declarative management: declare "I want an Nginx running" and k8s does the work
apiVersion: v1 #API version
kind: Pod #resource type
metadata: #metadata
name: nginx #object name
labels: #pod labels
app: nginx
spec: #container specification
containers: #container properties
- name: nginx #container name
image: nginx:alpine #image name
imagePullPolicy: IfNotPresent #image pull policy
ports: #container ports
- name: http
containerPort: 80 #port the container exposes
Chapter 11: Node Labels
1. Show node labels
kubectl get node --show-labels
2. Label the nodes
kubectl label nodes node2 CPU=Xeon
kubectl label nodes node3 disktype=ssd
3. Edit the POD manifest to use a node label selector
apiVersion: v1
kind: Pod
metadata:
name: nginx
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.0
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 80
nodeSelector:
#CPU: Xeon
disktype: ssd
4. Delete the pod and recreate it
kubectl delete pod nginx
kubectl create -f nginx-pod.yaml
5. Check the result
kubectl get pod -o wide
6. Remove the node labels
kubectl label nodes node2 CPU-
kubectl label nodes node3 disktype-
Chapter 12: POD Labels
1. About labels
One label can be shared by many PODs
One POD can also carry many labels
2. Show POD labels
kubectl get pod --show-labels
3. Adding labels
3.1 Method 1: edit the manifest directly:
apiVersion: v1
kind: Pod
metadata:
name: nginx
labels:
app: nginx
release: beta
3.2 Method 2: label from the command line
kubectl label pods nginx release=beta
kubectl label pods nginx job=linux
kubectl get pod --show-labels
4. Remove a label
kubectl label pod nginx job-
kubectl get pod --show-labels
5. POD label exercise
5.1 Create 2 PODs with different labels
kubectl create deployment nginx --image=nginx:1.14.0
kubectl get pod --show-labels
kubectl label pods nginx-xxxxxxxx release=stable
kubectl get pod --show-labels
5.2 Filter by label
kubectl get pods -l release=beta --show-labels
kubectl get pods -l release=stable --show-labels
5.3 Delete by label
kubectl delete pod -l app=nginx
Chapter 13: Running a Demo
1. Write the manifests
mysql-dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
spec:
replicas: 1
selector:
matchLabels:
app: mysql
template:
metadata:
labels:
app: mysql
spec:
containers:
- name: mysql
image: mysql:5.7
ports:
- containerPort: 3306
env:
- name: MYSQL_ROOT_PASSWORD
value: "123456"
mysql-svc.yaml
apiVersion: v1
kind: Service
metadata:
name: mysql
spec:
ports:
- port: 3306
selector:
app: mysql
tomcat-dp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: myweb
spec:
replicas: 2
selector:
matchLabels:
app: myweb
template:
metadata:
labels:
app: myweb
spec:
containers:
- name: myweb
image: kubeguide/tomcat-app:v1
ports:
- containerPort: 8080
env:
- name: MYSQL_SERVICE_HOST
value: 'mysql'
- name: MYSQL_SERVICE_PORT
value: '3306'
tomcat-svc.yaml
apiVersion: v1
kind: Service
metadata:
name: myweb
spec:
type: NodePort
ports:
- port: 8080
nodePort: 30001
selector:
app: myweb
2. Apply the manifests
kubectl create -f ./
3. Check the created resources
kubectl get pod -o wide
kubectl get svc
4. Open it in a browser: http://<NodeIP>:30001 (the nodePort set in tomcat-svc.yaml, e.g. http://10.0.0.12:30001)
Chapter 14: Using Harbor as a Private Registry
1. Clean up any previous Harbor install (the awk commands only print the docker commands; pipe the output to bash or run them by hand)
docker ps -a|grep "goharbor"|awk '{print "docker stop "$1}'
docker ps -a|grep "goharbor"|awk '{print "docker rm "$1}'
docker images|grep "goharbor"|awk '{print "docker rmi "$1":"$2}'
2. Unpack Harbor and edit its configuration
cd /opt/
tar zxf harbor-offline-installer-v1.9.0-rc1.tgz
cd harbor/
vim harbor.yml
hostname: 10.0.0.11
port: 8888
harbor_admin_password: 123456
data_volume: /data/harbor
3. Run the installer and open the UI
./install.sh
Browse to:
http://10.0.0.11:8888
4. Create a private project named k8s
Done in the web UI
5. Make docker trust the registry, then restart docker
NOTE: run this on all three servers!!!
cat >/etc/docker/daemon.json<<EOF
{
"registry-mirrors": ["https://ig2l319y.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"insecure-registries" : ["http://10.0.0.11:8888"]
}
EOF
systemctl restart docker
NOTE: restarting docker on node1 breaks harbor, so restart harbor afterwards
cd /opt/harbor
docker-compose stop
docker-compose start
6. Log docker in to harbor
docker login 10.0.0.11:8888
7. Convert the docker login credential to base64 so k8s can consume it
This only needs to be done on one node
[root@node1 ~]# cat /root/.docker/config.json|base64
ewoJImF1dGhzIjogewoJCSIxMC4wLjAuMTE6ODg4OCI6IHsKCQkJImF1dGgiOiAiWVdSdGFXNDZN
VEl6TkRVMiIKCQl9Cgl9LAoJIkh0dHBIZWFkZXJzIjogewoJCSJVc2VyLUFnZW50IjogIkRvY2tl
ci1DbGllbnQvMTguMDkuOSAobGludXgpIgoJfQp9
8. Write the Secret manifest
[root@node1 ~/demo]# cat harbor-secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: harbor-secret
data:
.dockerconfigjson: ewoJImF1dGhzIjogewoJCSIxMC4wLjAuMTE6ODg4OCI6IHsKCQkJImF1dGgiOiAiWVdSdGFXNDZNVEl6TkRVMiIKCQl9Cgl9LAoJIkh0dHBIZWFkZXJzIjogewoJCSJVc2VyLUFnZW50IjogIkRvY2tlci1DbGllbnQvMTguMDkuOSAobGludXgpIgoJfQp9
type: kubernetes.io/dockerconfigjson
9. Apply the Secret (delete any old copy first)
kubectl delete -f harbor-secret.yaml
kubectl create -f harbor-secret.yaml
kubectl get secrets
10. Retag the images and push them to harbor
docker tag kubeguide/tomcat-app:v1 10.0.0.11:8888/k8s/tomcat-app:v1
docker tag mysql:5.7 10.0.0.11:8888/k8s/mysql:5.7
docker push 10.0.0.11:8888/k8s/tomcat-app:v1
docker push 10.0.0.11:8888/k8s/mysql:5.7
11. Update the demo manifests: point the image fields at the harbor registry (e.g. image: 10.0.0.11:8888/k8s/mysql:5.7) and add the pull secret
mysql-dp.yaml
imagePullSecrets:
- name: harbor-secret
tomcat-dp.yaml
imagePullSecrets:
- name: harbor-secret
12. Apply the manifests and check
kubectl apply -f ./
kubectl get pod
Chapter 15: POD Controllers
1. What controllers do (a quick demo follows this list)
1. A bare pod resource is not recreated after you delete it
2. A controller watches on the user's behalf and keeps the desired number of pod replicas running on the appropriate nodes
3. If more replicas are running than desired, the controller deletes pods until the counts match
4. If fewer replicas are running than desired, the controller creates pods until the counts match
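A quick way to watch a controller reconcile (sketch; the generated pod name will differ in your cluster):
kubectl create deployment nginx --image=nginx:alpine
kubectl get pod                            # note the generated pod name
kubectl delete pod <generated-pod-name>    # remove one replica by hand
kubectl get pod                            # the controller has already started a replacement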
2. Common controller types
ReplicationController (RC):
ReplicaSet (RS):
Creates the desired number of pod replicas and keeps that count constant
Deployment:
Keeps the desired number of PODs by managing an RS
Supports rolling updates and rollback; rollback keeps 10 revisions by default
Declarative configuration that can be changed on the fly
The ideal controller for stateless applications
A given node may end up running zero or more of its PODs
DaemonSet:
Exactly one pod per node, always running
StatefulSet:
For stateful applications
Job:
A task that runs once rather than forever (a hedged sketch follows this list)
Exits only after the task is confirmed complete
CronJob:
A periodic task
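The notes show no manifest for Job, so here is a minimal hedged sketch of a run-once task (names and image are my own choice):
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-once
spec:
  backoffLimit: 2              # retry a failed pod at most twice
  template:
    spec:
      restartPolicy: Never     # a Job pod must not use the default Always
      containers:
      - name: hello
        image: busybox
        command: ["/bin/sh","-c","echo job finished"]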
3. The ReplicaSet controller
3.1 Write the RS manifest
cat >nginx-rs.yaml <<EOF
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: nginx-rs
namespace: default
spec:
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
name: nginx-pod
labels:
app: nginx
spec:
containers:
- name: nginx-containers
image: nginx:1.14.0
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 80
EOF
3.2 Apply the RS manifest
kubectl create -f nginx-rs.yaml
3.3 Inspect the RS
kubectl get rs
kubectl get pod -o wide
3.4 Edit the yaml file and apply the change
vim nginx-rs.yaml
kubectl apply -f nginx-rs.yaml
3.5 Change things on the fly: scale out, scale in, upgrade
kubectl edit rs nginx-rs
kubectl scale rs nginx-rs --replicas=5
4. The Deployment controller
4.1 Manifest
cat >nginx-dp.yaml<<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
namespace: default
spec:
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
name: nginx-pod
labels:
app: nginx
spec:
containers:
- name: nginx-containers
image: nginx:1.14.0
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 80
EOF
4.2 Apply the manifest
kubectl create -f nginx-dp.yaml
4.3 Inspect
kubectl get pod -o wide
kubectl get deployments.apps
kubectl describe deployments.apps nginx-deployment
4.4 Updating the version
Method 1: change the image on the command line, referencing the manifest file
kubectl set image -f nginx-dp.yaml nginx-containers=nginx:1.16.0
Check whether it updated:
kubectl get pod
kubectl describe deployments.apps nginx-deployment
kubectl describe pod nginx-deployment-7c596b4d95-6ztld
Method 2: change the image on the command line, referencing the resource
Open 2 terminals:
In the first, watch the pod states
kubectl get pod -w
In the second, run the update
kubectl set image deployment nginx-deployment nginx-containers=nginx:1.14.0
Check the deployment after the update:
kubectl describe deployments.apps nginx-deployment
----------------------------------------------------
Normal ScalingReplicaSet 14m deployment-controller Scaled up replica set nginx-deployment-7c596b4d95 to 1
Normal ScalingReplicaSet 14m deployment-controller Scaled down replica set nginx-deployment-9c74bb6c7 to 1
Normal ScalingReplicaSet 14m deployment-controller Scaled up replica set nginx-deployment-7c596b4d95 to 2
Normal ScalingReplicaSet 13m deployment-controller Scaled down replica set nginx-deployment-9c74bb6c7 to 0
Normal ScalingReplicaSet 8m30s deployment-controller Scaled up replica set nginx-deployment-9c74bb6c7 to 1
Normal ScalingReplicaSet 8m29s (x2 over 32m) deployment-controller Scaled up replica set nginx-deployment-9c74bb6c7 to 2
Normal ScalingReplicaSet 8m29s deployment-controller Scaled down replica set nginx-deployment-7c596b4d95 to 1
Normal ScalingReplicaSet 8m28s deployment-controller Scaled down replica set nginx-deployment-7c596b4d95 to 0
The update sequence:
nginx-deployment-7c596b4d95-8z7kf #old version
nginx-deployment-7c596b4d95-6ztld #old version
nginx-deployment-9c74bb6c7-pgfxz 0/1 Pending
nginx-deployment-9c74bb6c7-pgfxz 0/1 Pending
nginx-deployment-9c74bb6c7-pgfxz 0/1 ContainerCreating #pulling the new image
nginx-deployment-9c74bb6c7-pgfxz 1/1 Running #new POD running
nginx-deployment-7c596b4d95-8z7kf 1/1 Terminating #stopping one old POD
nginx-deployment-9c74bb6c7-h7mk2 0/1 Pending
nginx-deployment-9c74bb6c7-h7mk2 0/1 Pending
nginx-deployment-9c74bb6c7-h7mk2 0/1 ContainerCreating #pulling the new image
nginx-deployment-9c74bb6c7-h7mk2 1/1 Running #new POD running
nginx-deployment-7c596b4d95-6ztld 1/1 Terminating #stopping one old POD
nginx-deployment-7c596b4d95-8z7kf 0/1 Terminating #waiting for the old POD to exit
nginx-deployment-7c596b4d95-6ztld 0/1 Terminating #waiting for the old POD to exit
Watch the rollout status:
kubectl rollout status deployment nginx-deployment
Rolling update diagram:
4.5 Roll back to the previous revision
kubectl describe deployments.apps nginx-deployment
kubectl rollout undo deployment nginx-deployment
kubectl describe deployments.apps nginx-deployment
4.6 Roll back to a specific revision
v1 1.14.0
v2 1.15.0
v3 1.16.0
Goal: roll back to v1
Create the first version, 1.14.0
kubectl create -f nginx-dp.yaml --record
Update to the second version, 1.15.0
kubectl set image deployment nginx-deployment nginx-containers=nginx:1.15.0
Update to the third version, 1.16.0
kubectl set image deployment nginx-deployment nginx-containers=nginx:1.16.0
List all revisions
kubectl rollout history deployment nginx-deployment
Show a specific revision
kubectl rollout history deployment nginx-deployment --revision=1
Roll back to the chosen revision
kubectl rollout undo deployment nginx-deployment --to-revision=1
4.7 Scaling out and in
kubectl scale deployment nginx-deployment --replicas=5
kubectl scale deployment nginx-deployment --replicas=2
Chapter 16: Service
1. About Services
A Service is not a POD controller and is unrelated to them
A Service selects POD resources, including ones created by POD controllers
2. The three kinds of IP
NodeIP: the node IP that external users reach
ClusterIP: dynamically discovers and load-balances the POD IPs behind it (the endpoints check below shows this)
PodIP: the IP a POD itself uses
3. Create a ClusterIP Service
apiVersion: v1
kind: Service
metadata:
name: nginx-service
namespace: default
spec:
selector:
app: nginx
ports:
- name: http
port: 80
protocol: TCP
targetPort: 80
type: ClusterIP
4. Check the ClusterIP
kubectl get svc
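The Service finds its PODs through an Endpoints object of the same name; a hedged check of what sits behind the ClusterIP:
kubectl get endpoints nginx-service     # the POD IPs currently matching the app=nginx selector
kubectl describe svc nginx-service      # ClusterIP, ports, and the same endpoint list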
5. Create a NodePort Service manifest
apiVersion: v1
kind: Service
metadata:
name: nginx-service
namespace: default
spec:
selector:
app: nginx
ports:
- name: http
port: 8080 #the ClusterIP port
protocol: TCP
targetPort: 80 #the port the POD exposes
nodePort: 30000 #the NodeIP port, i.e. what external users hit
type: NodePort
6. Check the created resources
kubectl get svc
7. Diagram:
Chapter 17: Ingress Controllers
1. NodePort drawbacks
1. Before ingress, a pod could only be exposed as NodeIP:NodePort, and a port cannot be reused on a node: once one service takes 80, no other service can use it.
2. NodePort proxies at layer 4; it cannot parse layer-7 HTTP, so it cannot split traffic by domain name.
3. The Ingress resource and its controller solve this by providing a single unified entry point, working at layer 7.
4. We could build something similar with nginx/haproxy, but a traditional deployment cannot discover newly created resources; the config file would have to be edited and reloaded by hand.
5. The mainstream ingress controllers for k8s are ingress-nginx and traefik.
2. Installing traefik
2.1 traefik_dp.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
name: traefik-ingress-controller
namespace: kube-system
labels:
k8s-app: traefik-ingress-lb
spec:
replicas: 1
selector:
matchLabels:
k8s-app: traefik-ingress-lb
template:
metadata:
labels:
k8s-app: traefik-ingress-lb
name: traefik-ingress-lb
spec:
serviceAccountName: traefik-ingress-controller
terminationGracePeriodSeconds: 60
tolerations:
- operator: "Exists"
nodeSelector:
kubernetes.io/hostname: node1
containers:
- image: traefik:v1.7.17
name: traefik-ingress-lb
ports:
- name: http
containerPort: 80
hostPort: 80
- name: admin
containerPort: 8080
args:
- --api
- --kubernetes
- --logLevel=INFO
2.2 traefik_rbac.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: traefik-ingress-controller
namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
rules:
- apiGroups:
- ""
resources:
- services
- endpoints
- secrets
verbs:
- get
- list
- watch
- apiGroups:
- extensions
resources:
- ingresses
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
name: traefik-ingress-controller
namespace: kube-system
2.3 traefik_svc.yaml
kind: Service
apiVersion: v1
metadata:
name: traefik-ingress-service
namespace: kube-system
spec:
selector:
k8s-app: traefik-ingress-lb
ports:
- protocol: TCP
port: 80
name: web
- protocol: TCP
port: 8080
name: admin
type: NodePort
3. Apply the manifests
kubectl create -f ./
4. Check and access
kubectl -n kube-system get svc
5. Create an ingress rule for traefik's web UI
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: traefik-ui
namespace: kube-system
spec:
rules:
- host: traefik.ui.com
http:
paths:
- path: /
backend:
serviceName: traefik-ingress-service
servicePort: 8080
6. Access test (point traefik.ui.com at 10.0.0.11 in your hosts file first, since traefik is pinned to node1)
traefik.ui.com
7. Ingress exercise
7.1 Goal
Without ingress, access is only possible via IP + port:
tomcat 8080
nginx 8090
With ingress, plain domain names work:
traefik.nginx.com:80 --> nginx 8090
traefik.tomcat.com:80 --> tomcat 8080
7.2 Create the 2 pods and svcs
mysql-dp.yaml
mysql-svc.yaml
tomcat-dp.yaml
tomcat-svc.yaml
nginx-dp.yaml
nginx-svc-clusterip.yaml
7.3 Write and apply the ingress manifests
cat >nginx-ingress.yaml <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: traefik-nginx
namespace: default
spec:
rules:
- host: traefik.nginx.com
http:
paths:
- path: /
backend:
serviceName: nginx-service
servicePort: 80
EOF
cat >tomcat-ingress.yaml<<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: traefik-tomcat
namespace: default
spec:
rules:
- host: traefik.tomcat.com
http:
paths:
- path: /
backend:
serviceName: myweb
servicePort: 8080
EOF
kubectl apply -f nginx-ingress.yaml
kubectl apply -f tomcat-ingress.yaml
7.4 Check the created resources
kubectl get svc
kubectl get ingresses
kubectl describe ingresses traefik-nginx
kubectl describe ingresses traefik-tomcat
7.5 Access test (hosts entries again point both domains at 10.0.0.11)
traefik.nginx.com
traefik.tomcat.com
Chapter 18: Persistent Data
1. About Volumes
A Volume is a shared directory in a Pod that several containers can access
A Kubernetes Volume shares the lifecycle of its Pod, but is independent of the lifecycles of the containers inside it
Kubernetes supports many Volume types, and one Pod may use any number of Volumes at once
Volume types include:
EmptyDir: allocated automatically by K8S when the Pod is scheduled; the data is wiped when the Pod is removed. Good for scratch space.
hostPath: mounts a host directory into the Pod. Used for persistent data.
nfs: mounts an NFS share.
2. EmptyDir exercise
cat >emptyDir.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
name: busybox-empty
spec:
containers:
- name: busybox-pod
image: busybox
volumeMounts:
- mountPath: /data/busybox/
name: cache-volume
command: ["/bin/sh","-c","while true;do echo $(date) >> /data/busybox/index.html;sleep 3;done"]
volumes:
- name: cache-volume
emptyDir: {}
EOF
3. hostPath exercise
3.1 The type field
https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
DirectoryOrCreate — the directory is created if it does not exist
Directory — the directory must already exist
FileOrCreate — the file is created if it does not exist
File — the file must already exist
3.2 Write a hostPath volume manifest
apiVersion: v1
kind: Pod
metadata:
name: busybox-nodename
spec:
nodeName: node2
containers:
- name: busybox-pod
image: busybox
volumeMounts:
- mountPath: /data/pod/
name: hostpath-volume
command: ["/bin/sh","-c","while true;do echo $(date) >> /data/pod/index.html;sleep 3;done"]
volumes:
- name: hostpath-volume
hostPath:
path: /data/node/
type: DirectoryOrCreate
4. Placing a POD on a chosen Node
4.1 Method 1: pin the Node by name
apiVersion: v1
kind: Pod
metadata:
name: busybox-nodename
spec:
nodeName: node2
containers:
- name: busybox-pod
image: busybox
volumeMounts:
- mountPath: /data/pod/
name: hostpath-volume
command: ["/bin/sh","-c","while true;do echo $(date) >> /data/pod/index.html;sleep 3;done"]
volumes:
- name: hostpath-volume
hostPath:
path: /data/node/
type: DirectoryOrCreate
4.2 Method 2: select the Node by label
Label the node
kubectl label nodes node3 disktype=SSD
The manifest
apiVersion: v1
kind: Pod
metadata:
name: busybox-nodename
spec:
nodeSelector:
disktype: SSD
containers:
- name: busybox-pod
image: busybox
volumeMounts:
- mountPath: /data/pod/
name: hostpath-volume
command: ["/bin/sh","-c","while true;do echo $(date) >> /data/pod/index.html;sleep 3;done"]
volumes:
- name: hostpath-volume
hostPath:
path: /data/node/
type: DirectoryOrCreate
5. A persistent mysql Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql-dp
namespace: default
spec:
selector:
matchLabels:
app: mysql
replicas: 1
template:
metadata:
name: mysql-pod
namespace: default
labels:
app: mysql
spec:
containers:
- name: mysql-pod
image: mysql:5.7
ports:
- name: mysql-port
containerPort: 3306
env:
- name: MYSQL_ROOT_PASSWORD
value: "123456"
volumeMounts:
- mountPath: /var/lib/mysql
name: mysql-volume
volumes:
- name: mysql-volume
hostPath:
path: /data/mysql
type: DirectoryOrCreate
nodeSelector:
disktype: SSD
Chapter 19: PV and PVC
1. About PV and PVC
A PV abstracts underlying shared network storage, defining it as a "resource".
PVs are created and configured by the administrator
A PV must be backed by shared storage
A PVC is a user's "claim" on storage resources.
Just as Pods consume Node resources, PVCs "consume" PV resources
A PVC can request a specific size and access mode
2. PV and PVC lifecycle: a PV moves through Available -> Bound -> Released (and possibly Failed) as claims are created and released; kubectl get pv shows the phase in the STATUS column
3. Exercise: an nfs-backed PV and PVC for mysql
3.1 Install nfs on the master node
yum install nfs-utils -y
mkdir /data/nfs-volume/mysql -p
vim /etc/exports
/data/nfs-volume 10.0.0.0/24(rw,async,no_root_squash,no_all_squash)
systemctl start rpcbind
systemctl start nfs
showmount -e 127.0.0.1
3.2 Install nfs on all worker nodes
yum install nfs-utils.x86_64 -y
showmount -e 10.0.0.11
3.3 Write and create the nfs PV
cat >nfs-pv.yaml <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv01
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Recycle
storageClassName: nfs
nfs:
path: /data/nfs-volume/mysql
server: 10.0.0.11
EOF
kubectl create -f nfs-pv.yaml
kubectl get persistentvolume
3.4 Create the mysql PVC
cat >mysql-pvc.yaml <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pvc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: nfs
EOF
kubectl create -f mysql-pvc.yaml
kubectl get pvc
3.5 Create the mysql Deployment
cat >mysql-dp.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
spec:
replicas: 1
selector:
matchLabels:
app: mysql
template:
metadata:
labels:
app: mysql
spec:
containers:
- name: mysql
image: mysql:5.7
ports:
- containerPort: 3306
env:
- name: MYSQL_ROOT_PASSWORD
value: "123456"
volumeMounts:
- name: mysql-pvc
mountPath: /var/lib/mysql
- name: mysql-log
mountPath: /var/log/mysql
volumes:
- name: mysql-pvc
persistentVolumeClaim:
claimName: mysql-pvc
- name: mysql-log
hostPath:
path: /var/log/mysql
nodeSelector:
disktype: SSD
EOF
kubectl create -f mysql-dp.yaml
kubectl get pod -o wide
3.6 How to test it (a command sketch follows this list)
1. Create the nfs PV
2. Create the mysql PVC
3. Create the mysql Deployment that mounts the PVC
4. Log in to the mysql pod and create a database
5. Delete the pod; the Deployment enforces the replica count, so a new pod is created automatically
6. Log in to the new pod and check whether the database is still there
7. If it is, the data really is persisted
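A hedged command sketch of steps 4-6 (pod names are placeholders; the password 123456 comes from the Deployment above):
kubectl exec -it <mysql-pod> -- mysql -uroot -p123456 -e "create database pv_test;"
kubectl delete pod <mysql-pod>
kubectl get pod                 # the Deployment has created a replacement pod
kubectl exec -it <new-mysql-pod> -- mysql -uroot -p123456 -e "show databases;"   # pv_test should still be listed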
3.7 The accessModes field
ReadWriteOnce — read-write by a single node
ReadOnlyMany — read-only by many nodes
ReadWriteMany — read-write by many nodes
resources — the size requested, e.g. at least 5Gi
3.8 Related PV fields
volumeName — bind to one PV by exact name
capacity — the storage size the PV offers
persistentVolumeReclaimPolicy — what happens to the PV after release:
Retain — the data on the PV is kept after the claim is released
Recycle — the data on the PV is scrubbed
Delete — the PV itself is deleted once the PVC is unbound
Note: with static provisioning a matching PV must already exist before a user's claim can be satisfied, so requests are not fulfilled automatically; a carelessly deleted PV also puts the data at risk.
Chapter 20: The configMap Resource
1. Why use configMap?
It decouples configuration files from the POD
2. How does a configMap store configuration?
As key:value pairs
key:value
file name : file content
3. What configMap can hold
Key-value pairs defined directly
Key-value pairs created from files
4. Ways to create a configMap
Command line
Manifest
5. How a configMap reaches the POD (both ways are shown below)
As environment variables
As a mounted volume
6. Create a configMap from the command line
kubectl create configmap --help
kubectl create configmap nginx-config --from-literal=nginx_port=80 --from-literal=server_name=nginx.cookzhang.com
kubectl get cm
kubectl describe cm nginx-config
7. Consume the configMap as POD environment variables
kubectl explain pod.spec.containers.env.valueFrom.configMapKeyRef
cat >nginx-cm.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
name: nginx-cm
spec:
containers:
- name: nginx-pod
image: nginx:1.14.0
ports:
- name: http
containerPort: 80
env:
- name: NGINX_PORT
valueFrom:
configMapKeyRef:
name: nginx-config
key: nginx_port
- name: SERVER_NAME
valueFrom:
configMapKeyRef:
name: nginx-config
key: server_name
EOF
kubectl create -f nginx-cm.yaml
8. Check that the pod picked up the variables
[root@node1 ~/confimap]# kubectl exec -it nginx-cm /bin/bash
root@nginx-cm:~# echo ${NGINX_PORT}
80
root@nginx-cm:~# echo ${SERVER_NAME}
nginx.cookzhang.com
root@nginx-cm:~# printenv |egrep "NGINX_PORT|SERVER_NAME"
NGINX_PORT=80
SERVER_NAME=nginx.cookzhang.com
Note:
When passed as variables, editing the configMap does NOT update the running POD
Environment variables are resolved only at POD creation; once the POD exists they never change (a hedged check follows)
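A hedged way to see this for yourself with the pod created above:
kubectl edit cm nginx-config                          # change nginx_port from 80 to, say, 8080
kubectl exec -it nginx-cm -- printenv NGINX_PORT      # still prints 80 - the value was frozen at creation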
9. Create a configMap from a file
Create the config file:
cat >www.conf <<EOF
server {
listen 80;
server_name www.cookzy.com;
location / {
root /usr/share/nginx/html/www;
index index.html index.htm;
}
}
EOF
Create the configMap resource:
kubectl create configmap nginx-www --from-file=www.conf=./www.conf
Inspect the cm resource
kubectl get cm
kubectl describe cm nginx-www
Write a pod that mounts the configMap as a volume
cat >nginx-cm-volume.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
name: nginx-cm
spec:
containers:
- name: nginx-pod
image: nginx:1.14.0
ports:
- name: http
containerPort: 80
volumeMounts:
- name: nginx-www
mountPath: /etc/nginx/conf.d/
volumes:
- name: nginx-www
configMap:
name: nginx-www
items:
- key: www.conf
path: www.conf
EOF
Test:
1. Enter the container and view the file
kubectl exec -it nginx-cm /bin/bash
cat /etc/nginx/conf.d/www.conf
2. Edit the configMap on the fly
kubectl edit cm nginx-www
3. Enter the container again and see whether the config updated automatically
cat /etc/nginx/conf.d/www.conf
nginx -T
10. Storing file contents directly as data in the configMap
Write the configMap manifest:
cat >nginx-configMap.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-config
namespace: default
data:
www.conf: |
server {
listen 80;
server_name www.cookzy.com;
location / {
root /usr/share/nginx/html/www;
index index.html index.htm;
}
}
blog.conf: |
server {
listen 80;
server_name blog.cookzy.com;
location / {
root /usr/share/nginx/html/blog;
index index.html index.htm;
}
}
EOF
Apply and inspect the manifest:
kubectl create -f nginx-configMap.yaml
kubectl get cm
kubectl describe cm nginx-config
Write a POD manifest that consumes the configMap
cat >nginx-cm-volume-all.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
name: nginx-cm
spec:
containers:
- name: nginx-pod
image: nginx:1.14.0
ports:
- name: http
containerPort: 80
volumeMounts:
- name: nginx-config
mountPath: /etc/nginx/conf.d/
volumes:
- name: nginx-config
configMap:
name: nginx-config
items:
- key: www.conf
path: www.conf
- key: blog.conf
path: blog.conf
EOF
Apply and check:
kubectl create -f nginx-cm-volume-all.yaml
kubectl get pod
kubectl describe pod nginx-cm
Enter the container and look:
kubectl exec -it nginx-cm /bin/bash
ls /etc/nginx/conf.d/
cat /etc/nginx/conf.d/www.conf
Test whether editing the configMap propagates
kubectl edit cm nginx-config
kubectl exec -it nginx-cm /bin/bash
ls /etc/nginx/conf.d/
cat /etc/nginx/conf.d/www.conf
Chapter 21: Authentication and RBAC
The API Server is the single entry point for access control
Every operation on a k8s object goes through three security stages
1. Authentication
HTTP token authentication — bearer tokens
SSL authentication — kubectl and the apiserver verify each other with certificates (mutual TLS)
2. Authorization
RBAC — role-based access control
3. Admission control
A further supplement to authorization, generally applied to create, delete and proxy operations
k8s API accounts come in 2 kinds
1. Real human users — userAccount
2. POD clients — serviceAccount; every POD carries authentication info by default
RBAC, role-based access control, answers:
what permissions does this account hold?
Role — grants permissions within a single namespace
ClusterRole — grants permissions cluster-wide
Taking traefik as the example:
1. Create the account — ServiceAccount: traefik-ingress-controller
2. Create the role — ClusterRole: traefik-ingress-controller
3. Bind account and role — traefik-ingress-controller
RoleBinding
ClusterRoleBinding
4. Reference the ServiceAccount in the POD spec (a minimal namespace-scoped sketch follows)
serviceAccountName: traefik-ingress-controller
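For contrast with traefik's cluster-wide setup, a minimal namespace-scoped sketch (account and role names are my own illustration) that lets a ServiceAccount only read Pods in default:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-reader
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-pods
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: read-pods
subjects:
- kind: ServiceAccount
  name: pod-reader
  namespace: default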
NOTE!!!
In a kubeadm-installed cluster the certificates are valid for only 1 year by default (a hedged expiry check follows)
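To see the expiry dates on this kubeadm/v1.16 setup (in that release the subcommand still lived under alpha; hedged):
kubeadm alpha certs check-expiration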
Chapter 22: The k8s Dashboard
1. Official project
https://github.com/kubernetes/dashboard
2. Download the manifest
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc5/aio/deploy/recommended.yaml
3. Edit the manifest (the numbers are line numbers inside recommended.yaml)
39 spec:
40 type: NodePort
41 ports:
42 - port: 443
43 targetPort: 8443
44 nodePort: 30000
4. Apply the manifest
kubectl create -f recommended.yaml
5. Create an admin account and apply it
https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md
cat > dashboard-admin.yaml<<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kubernetes-dashboard
EOF
kubectl create -f dashboard-admin.yaml
6. Check the resources and fetch the token
kubectl get pod -n kubernetes-dashboard -o wide
kubectl get svc -n kubernetes-dashboard
kubectl get secret -n kubernetes-dashboard
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
7. Open in a browser
https://10.0.0.11:30000
If Chrome refuses the self-signed certificate, try Firefox instead
8. Troubleshooting
{"level":"error","msg":"Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)","time":"2020-03-03T09:57:00Z"}
Skipping metric because of error: Metric label not set
Cause:
The metrics collection component (metrics-server) is not installed
Chapter 23: Prometheus
1. Official site
https://github.com/prometheus/prometheus
2. Components needed to monitor k8s
metric-server collects resource data for in-cluster consumers such as kubectl, hpa and the scheduler
prometheus-operator deploys prometheus, which stores the monitoring data
kube-state-metrics collects data about the resource objects in the cluster
node_exporter collects per-node metrics
prometheus scrapes the apiserver, scheduler, controller-manager and kubelet
alertmanager handles alerting
grafana provides the dashboards
metrics-server focuses on the resource-metrics API: CPU, file descriptors, memory, request latency and so on.
kube-state-metrics focuses on object metadata: Deployment and Pod replica states, for example
3. Install prometheus
Load the image
docker load < prom-prometheusv2_2_1.tar
Create the namespace
kubectl create namespace prom
Create the resources
cd prometheus
kubectl create -f ./
Check them
kubectl -n prom get all -o wide
View in a browser
http://10.0.0.11:30090/targets
4. Install metrics-server
Load the images
docker load < k8s-gcr-io-addon-resizer1_8_6.tar
docker load < k8s-gcr-io-metrics-server-amd64v0-3-6.tar
Create the resources
kubectl create -f ./
Check
kubectl top node
kubectl top pod
5. Install node-exporter
Load the image
docker load < prom-node-exporterv0_15_2.tar
Create the resources
kubectl create -f ./
Check the resources
kubectl -n prom get pod -o wide
kubectl -n prom get svc
View in a browser
http://10.0.0.12:9100/metrics
http://10.0.0.13:9100/metrics
6. Install kube-state-metrics
Load the image
docker load < gcr-io-google_containers-kube-state-metrics-amd64v1-3-1.tar
Create the resources
kubectl create -f ./
Check
kubectl -n prom get pod
kubectl -n prom get svc
curl 10.1.232.109:8080/metrics
7. Install grafana and k8s-prometheus-adapter
Load the images
docker load < directxman12-k8s-prometheus-adapter-amd64-latest.tar
docker load < k8s-gcr-io-heapster-grafana-amd64v5_0_4.tar
Edit the grafana manifest (file line numbers shown)
1 apiVersion: apps/v1
2 kind: Deployment
3 metadata:
4 name: monitoring-grafana
5 namespace: prom
6 spec:
7 selector:
8 matchLabels:
9 k8s-app: grafana
10 replicas: 1
11 template:
Create the resources
cd k8s-prometheus-adapter
kubectl create -f ./
Check what was created
kubectl -n prom get pod -o wide
kubectl -n prom get svc
View in a browser
http://10.0.0.11:32725
Import a dashboard
https://grafana.com/grafana/dashboards/10000
prometheus query examples
sum by (name) (rate (container_cpu_usage_seconds_total{image!=""}[1m]))
container_cpu_usage_seconds_total{name =~ "^k8s_POD.*",namespace="default"}
Label-matching operators:
=~  regex match
==  exact match
!=  not equal
!~  regex mismatch
Query:
sum (container_memory_working_set_bytes{image!="",name=~"^k8s_.*",kubernetes_io_hostname=~".*",namespace="default"}) by (pod)
Reading it:
sum (metric{label1!="value1",label2!="value2"}) by (grouping label)
After adding the namespace variable, change the grafana panel query to
sum (container_memory_working_set_bytes{image!="",name=~"^k8s_.*",kubernetes_io_hostname=~"^$Node$",namespace=~"^$Namespace$"}) by (pod)
Chapter 24: HPA Automatic Scaling
https://kubernetes.io/zh/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
1. Build the test image
Create the test page
cat index.php
<?php
$x = 0.0001;
for ($i = 0; $i <= 1000000; $i++) {
$x += sqrt($x);
}
echo "OK!";
?>
Create the dockerfile
cat dockerfile
FROM php:5-apache
ADD index.php /var/www/html/index.php
RUN chmod a+rx index.php
Build the image
docker build -t php:v1 .
2. Create the php Deployment
cat >php-dp.yaml<<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
run: php-apache
name: php-apache
namespace: default
spec:
replicas: 1
selector:
matchLabels:
run: php-apache
template:
metadata:
labels:
run: php-apache
spec:
containers:
- image: php:v1
imagePullPolicy: IfNotPresent
name: php-apache
ports:
- containerPort: 80
protocol: TCP
resources:
requests:
cpu: 200m
EOF
3. Create the hpa resource
cat >php-hpa.yaml<<EOF
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
name: php-apache
namespace: default
spec:
maxReplicas: 10
minReplicas: 1
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: php-apache
targetCPUUtilizationPercentage: 50
EOF
4. Check
kubectl get svc
kubectl get pod
kubectl get hpa
5. Load test (10.1.28.100 is the php-apache Service ClusterIP in this environment; substitute your own)
while true; do wget -q -O- http://10.1.28.100; done
6. Watch the hpa react
kubectl get hpa -w
kubectl get pod -w
7. The same setup with two one-liners, if the steps above feel tedious
Create the deployment
kubectl run php-apache --image=php:v1 --requests=cpu=200m --expose --port=80
Create the hpa
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
Load test
while true; do wget -q -O- http://10.1.28.100; done