Getting Started with Kubernetes, Week 7

1. Log Collection Case Study: In-Container Log Collection

In-container log collection architecture


(architecture diagram)

1.1 Building the image

  • Required files
# ll
total 31908
-rwxr-xr-x 1 root root      264 Sep 13 03:48 build-command.sh*
-rwxr-xr-x 1 root root    23611 Sep  8 07:14 catalina.sh*
-rw-r--r-- 1 root root      534 Sep 13 03:15 Dockerfile
-rw-r--r-- 1 root root 32600353 Sep  8 07:14 filebeat-7.12.1-x86_64.rpm
-rw-r--r-- 1 root root      687 Sep 13 03:37 filebeat.yml
-rw-r--r-- 1 root root      149 Sep  8 07:14 myapp.tar.gz
-rwxr-xr-x 1 root root      372 Sep  8 07:14 run_tomcat.sh*
-rw-r--r-- 1 root root     6462 Sep  8 07:14 server.xml
  • Dockerfile
# vim Dockerfile 
FROM harbor.zhao.net/pub-images/tomcat-base:v8.5.43   #use the previously built base image, which already contains the filebeat package

#ADD filebeat-7.5.1-x86_64.rpm /tmp/  #if the base image does not include filebeat, these two steps are needed
#RUN cd /tmp && yum localinstall -y filebeat-7.5.1-x86_64.rpm   #package download: https://mirrors.tuna.tsinghua.edu.cn/elasticstack/7.x/apt/pool/main/f/filebeat/filebeat-7.5.1-amd64.deb (that .deb is for apt-based images; on yum-based images use the matching .rpm as shown)
ADD catalina.sh /apps/tomcat/bin/catalina.sh
ADD server.xml /apps/tomcat/conf/server.xml
ADD myapp.tar.gz /data/tomcat/webapps/myapp/
ADD run_tomcat.sh /apps/tomcat/bin/run_tomcat.sh
ADD filebeat.yml /etc/filebeat/filebeat.yml 
RUN chown  -R tomcat.tomcat /data/ /apps/

EXPOSE 8080 8443

CMD ["/apps/tomcat/bin/run_tomcat.sh"]
  • filebeat.yml
# vim filebeat.yml 
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /apps/tomcat/logs/catalina.out
  fields:
    type: filebeat-tomcat-catalina
- type: log
  enabled: true
  paths:
    - /apps/tomcat/logs/localhost_access_log.*.txt 
  fields:
    type: filebeat-tomcat-accesslog
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:

output.kafka:
  hosts: ["172.20.20.25:9092","172.20.20.26:9092","172.20.20.27:9092"]
  required_acks: 1
  topic: "filebeat-web-app1"
  compression: gzip
  max_message_bytes: 1000000
  • run_tomcat.sh
# cat run_tomcat.sh 
#!/bin/bash
/usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat &
su - tomcat -c "/apps/tomcat/bin/catalina.sh start"
tail -f /etc/hosts
  • Build the image and push it to the local registry
# vim build-command.sh 
#!/bin/bash
nerdctl build -t  harbor.zhao.net/zhao/tomcat-app1:${TAG}  .
nerdctl push harbor.zhao.net/zhao/tomcat-app1:${TAG}

# ./build-command.sh v1-filebeat

1.2 Running the web service

  • Deploy the service
# vim 3.tomcat-app1.yaml 
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: web-tomcat-app1-filebeat-deployment-label
  name: web-tomcat-app1-filebeat-deployment
  namespace: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-tomcat-app1-filebeat-selector
  template:
    metadata:
      labels:
        app: web-tomcat-app1-filebeat-selector
    spec:
      containers:
      - name: web-tomcat-app1-filebeat-container
        image: harbor.zhao.net/zhao/tomcat-app1:v1-filebeat
        imagePullPolicy: IfNotPresent
        #imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        resources:
          limits:
            cpu: 1
            memory: "512Mi"
          requests:
            cpu: 500m
            memory: "512Mi"

#Apply the yaml file
# kubectl apply -f 3.tomcat-app1.yaml


# kubectl get pod -n web
NAME                                                   READY   STATUS        RESTARTS      AGE
web-tomcat-app1-filebeat-deployment-7c7bb88cdd-7frzp   1/1     Running       0             15s
  • Create the service
# vim 4.tomcat-service.yaml 
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: web-tomcat-app1-filebeat-service-label
  name: web-tomcat-app1-filebeat-service
  namespace: web
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 30092
  selector:
    app: web-tomcat-app1-filebeat-selector

#Apply the yaml file
# kubectl apply -f 4.tomcat-service.yaml
# kubectl get svc -n web
NAME                               TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
web-tomcat-app1-filebeat-service   NodePort   10.100.201.25    <none>        80:30092/TCP                 5s
(screenshot: accessing the web service via NodePort 30092)

1.3 Verifying log data in Kafka

(screenshot: log data on the filebeat-web-app1 topic in Kafka)
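
The same data can be checked from the command line by attaching a consumer to the topic on any Kafka node (a minimal sketch; the Kafka installation path is an assumption):
# /usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server 172.20.20.25:9092 --topic filebeat-web-app1 --from-beginning
Each message should be a JSON document produced by filebeat, carrying the fields.type value set in filebeat.yml.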

1.4 Configuring the logstash server

Prepare the conf file, then restart the service

# vim /etc/logstash/conf.d/filebeat-log-to-es.conf
input {
  kafka {
    bootstrap_servers => "172.20.20.25:9092,172.20.20.26:9092,172.20.20.27:9092"
    topics => ["filebeat-web-app1"]
    codec => "json"
  }
}


output {
  if [fields][type] == "filebeat-tomcat-catalina" {
    elasticsearch {
      hosts => ["172.20.20.21:9200","172.20.20.22:9200","172.20.20.23:9200"]
      index => "filebeat-tomcat-catalina-%{+YYYY.MM.dd}"
    }}

  if [fields][type] == "filebeat-tomcat-accesslog" {
    elasticsearch {
      hosts => ["172.20.20.21:9200","172.20.20.22:9200","172.20.20.23:9200"]
      index => "filebeat-tomcat-accesslog-%{+YYYY.MM.dd}"
    }}

}

#Restart the service
# systemctl restart logstash.service

1.5 Verifying data in Elasticsearch

(screenshot: filebeat indices in Elasticsearch)
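
The indices can also be checked from the command line via the standard _cat API (a sketch):
# curl 'http://172.20.20.21:9200/_cat/indices?v' | grep filebeat-tomcat
Both filebeat-tomcat-catalina-* and filebeat-tomcat-accesslog-* indices should appear once logstash has consumed data from Kafka.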

1.6 Displaying log data in Kibana

  • Create the index pattern


    (screenshot: creating the index pattern)
  • View the logs


    (screenshots: log entries displayed in Kibana)

2. Summary of Overlay and Underlay Communication

2.1 Overlay overview

  • VxLAN: VxLAN stands for Virtual eXtensible Local Area Network and was promoted primarily by Cisco. VxLAN is an extension protocol for VLAN and one of the NVO3 (Network Virtualization over Layer 3) standards defined by the IETF. Its defining trait is that it encapsulates L2 Ethernet frames into UDP packets (L2 over L4, i.e. MAC in UDP) and transports them across an L3 network. VxLAN is essentially an overlay tunneling technique: L2 Ethernet frames are wrapped in L4 UDP datagrams and carried over the L3 network, so the frames behave as if they were forwarded inside a single broadcast domain while actually crossing the L3 network, yet without being constrained by it. VxLAN uses a 24-bit segment identifier, so it supports 2^24 = 16777216 segments, which is vastly more scalable than VLAN and sufficient for large-scale data center networks.

  • VTEP (VXLAN Tunnel Endpoint): the edge device of a VXLAN network and the start and end point of a VXLAN tunnel. All encapsulation and decapsulation of the user's original frames happens on the VTEP. A VTEP attaches to the physical network and is assigned a physical-network IP address: the source IP of a VXLAN packet is the local node's VTEP address and the destination IP is the peer node's VTEP address, so a pair of VTEP addresses corresponds to one VXLAN tunnel. On a server, the virtual switch tunnel interface (e.g. flannel.1) is a VTEP; a virtualized network with multiple VXLANs needs multiple VTEPs to encapsulate and decapsulate traffic for the different segments (see the manual sketch after this list).

  • VNI (VXLAN Network Identifier): similar to a VLAN ID, the VNI distinguishes VXLAN segments; virtual machines in different VXLAN segments cannot communicate directly at L2. One VNI represents one tenant: even if multiple end users belong to the same VNI, they still constitute a single tenant.

  • NVGRE: Network Virtualization using Generic Routing Encapsulation, backed mainly by Microsoft. Unlike VXLAN, NVGRE does not use a standard transport protocol (TCP/UDP) but relies on Generic Routing Encapsulation (GRE), using the low 24 bits of the GRE header as the Tenant Network Identifier (TNI); like VXLAN, it supports 16777216 segments.
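
As an illustration of the VTEP concept, a VXLAN tunnel can be built by hand with iproute2; a minimal point-to-point sketch (all interface names and IPs are placeholder assumptions):
# ip link add vxlan100 type vxlan id 100 dstport 4789 local 192.168.10.11 remote 192.168.10.12 dev eth0    #create VTEP vxlan100 with VNI 100; encapsulated traffic leaves via eth0 on the standard UDP port 4789
# ip addr add 172.16.100.1/24 dev vxlan100
# ip link set vxlan100 up
After the same is done on the peer host (with local/remote swapped and 172.16.100.2/24), the two hosts can reach each other over the overlay as if they shared one L2 segment.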

2.2 Overlay communication flow

(diagram: overlay communication flow)
  1. VM A sends an L2 frame to request communication with VM B.

  2. The source host's VTEP adds (encapsulates) the VXLAN, UDP and IP headers.

  3. Network devices forward the encapsulated packet as a standard packet across the L3 network to the destination host.

  4. The destination host's VTEP removes (decapsulates) the VXLAN, UDP and IP headers.

  5. The original L2 frame is delivered to the destination VM.

2.3 Overlay use cases

(diagrams: overlay use cases)
  1. An overlay network is a new virtual network layered on top of the physical network, so that containers in the network can communicate with one another.

  2. Its advantage is good compatibility with the physical network; it enables pod communication across host subnets.

  3. Network plugins such as calico and flannel support overlay networking.

  4. Its drawback is the extra encapsulation/decapsulation overhead.

  5. It is currently widely used in private clouds.

2.4 Underlay overview

(diagram: underlay network)
  1. An underlay network is the traditional IT infrastructure network, composed of devices such as switches and routers and driven by Ethernet, routing and VLAN protocols. It is also the network beneath an overlay network, providing it with data communication services. In container networking, an underlay network is a technique that exposes the host's underlying network interface directly to containers through a driver; common solutions include MAC VLAN, IP VLAN and direct routing.

  2. Underlay relies on the underlying physical network for cross-host communication.

2.5 Underlay implementation modes

(diagram: underlay implementation modes)
  1. MAC VLAN mode:
  • MAC VLAN: allows multiple virtual network interfaces (sub-interfaces) to be created on one Ethernet interface; each virtual interface has its own unique MAC address and can be assigned its own IP.
  2. IP VLAN mode:
  • IP VLAN is similar to MAC VLAN: it also creates new virtual interfaces and assigns each a unique IP address; the difference is that every virtual interface shares the physical interface's MAC address.

2.6 MAC VLAN operating modes

Private mode:

  • In private mode, containers on the same host cannot communicate with each other, not even if a switch hairpins the traffic back.

VEPA mode:

  • Virtual Ethernet Port Aggregator (VEPA). In this mode, containers in the macvlan cannot directly receive request packets from containers on the same physical NIC, but communication works if a switch reflects the traffic back (port hairpinning).

Passthru mode:

  • In passthru mode the macvlan can host only one container; starting another container after the first one fails.

Bridge mode:

  • In bridge mode, macvlan containers sharing the same host network can communicate with each other directly; this mode is recommended (see the example below).
(diagram: underlay architecture)
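
To see the mechanism outside of any container runtime, a bridge-mode macvlan sub-interface can be created manually; a sketch (interface names are assumptions):
# ip link add link eth0 name macvlan1 type macvlan mode bridge    #sub-interface with its own MAC address
# ip link add link eth0 name ipvlan1 type ipvlan mode l2    #IP VLAN variant: shares eth0's MAC address
Container runtimes drive these same kernel features through their macvlan/ipvlan network drivers.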

Network communication summary:

  • Overlay: an overlay network built on encapsulation techniques such as VXLAN and NVGRE.
  • Macvlan: multiple virtual VLANs built on sub-interfaces of the Docker host's physical NIC, one sub-interface per virtual VLAN; containers reach the outside world through the host's routing.

2.7 Summary of Kubernetes pod communication

2.7.1 Three CNI plugin modes

(diagram: the three CNI plugin modes)

2.7.2 k8s network communication modes

Overlay networks:

  • Flannel VXLAN, Calico BGP, Calico VXLAN
  • Pod address information is encapsulated inside host address information, enabling communication across hosts and across node subnets.

Direct routing:

  • Flannel host-gw, Flannel VXLAN DirectRouting, Calico DirectRouting
  • Based on host routes, packets are forwarded directly from the source host to the destination host without overlay encapsulation, so performance is better than overlay.

Underlay:

  • Pods do not need a separate virtual network; they use the host's physical network directly and can even be reached from nodes outside the k8s environment (the pod network is bridged with the node network). This effectively treats pods like bridged virtual machines, which makes it convenient for clients outside the k8s environment to access services in pods; and since pods use the host network directly, performance is the best of the three.

3. Summary of the Flannel Network Component

3.1 CNI

Viewed from the underlying network, kubernetes network communication can be considered at three levels:

  • communication between containers inside a Pod;
  • communication between Pods on the same host;
  • communication between Pods across hosts.

For the first two, the communication principles are not hard to understand.

  1. Containers inside a Pod share the same Network Namespace (implemented via the Pause container), i.e. the same network interface, so they can communicate directly.
  2. For Pods on the same host, Docker creates a docker0 bridge on each host and all Pod containers on the host attach to it, so they can reach each other.

For the third, cross-host Pod communication, Docker offers no good solution. For Kubernetes, cross-host Pod communication is essential, but Kubernetes chose not to solve it itself: it focuses on container orchestration and delegates cross-host container networking to third parties. This is the CNI mechanism.

CNI stands for Container Network Interface, the API for container networking. The direction of kubernetes networking is to integrate different network solutions through plugins, and CNI is the result of that effort. CNI focuses only on container network connectivity and on releasing resources when containers are destroyed; it provides a framework, so CNI can support many different network modes and is easy to implement. Commonly used CNI implementations include Flannel, Calico, etc.
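
Concretely, the kubelet discovers the CNI plugin through a conf file under /etc/cni/net.d/ on each node and invokes it whenever a pod sandbox is created or deleted. A typical flannel conflist looks roughly like this (a sketch; exact contents vary by installation):
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    { "type": "flannel", "delegate": { "hairpinMode": true, "isDefaultGateway": true } },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}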

CNI plugins usually come in three modes:

  • Overlay: connectivity via tunnels; does not depend on the underlying network;
  • Routing: connectivity via routes; partially depends on the underlying network;
  • Underlay: connectivity via the underlying network itself; strongly depends on it.

Choose a CNI plugin according to your actual needs. For NetworkPolicy, i.e. whether access policies between Pod networks must be supported, consider Calico or Weave; for Pod creation speed, overlay and routing mode CNI plugins create Pods quickly while underlay is slower; for network performance, overlay is comparatively poor while underlay and routing modes are faster.

3.2 How Flannel works

Flannel was originally developed by CoreOS and is one of the most mature network plugin examples in container orchestration. With the rise of the CNI concept, flannel was also among the first plugins to implement the CNI standard (which CoreOS also proposed). Flannel's purpose is simple and clear: solve cross-node container access. It re-plans IP address allocation for all nodes in the cluster so that containers created on different node hosts get cluster-wide unique, routable IP addresses, and containers on different nodes can communicate directly over internal IPs.
Flannel is essentially an overlay network: TCP data is wrapped inside another kind of network packet for routing, forwarding and communication; it currently supports forwarding backends such as UDP, VXLAN, AWS VPC and GCE routes.

Flannel runs an agent named flanneld on every host, which pre-allocates a subnet for the host and assigns IP addresses to Pods. Flannel uses Kubernetes or etcd to store the network configuration, the allocated subnets, the hosts' public IPs, and so on. Packets are forwarded by backends such as VXLAN, UDP or host-gw.

Flannel mandates that all Pods on one host belong to the same subnet, while Pods on different hosts belong to different subnets.

3.3 Flannel modes

Flannel supports three implementations: UDP, VXLAN and host-gw.

  • UDP mode: uses the flannel.0 device for encapsulation/decapsulation; not natively supported by the kernel, with frequent kernel/user space switches, so performance is very poor;
  • VXLAN mode: uses flannel.1 for encapsulation/decapsulation; natively supported by the kernel, so performance is strong;
  • host-gw mode: no intermediate device like flannel.1; the host itself acts as the subnet's next hop, giving the best performance.

host-gw loses roughly 10% performance, while all the VXLAN tunnel based solutions lose roughly 20-30%.
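
The mode is selected in flannel's net-conf.json, usually delivered through the kube-flannel ConfigMap; a sketch, with the pod network CIDR assumed to match this environment:
{
  "Network": "10.200.0.0/16",
  "Backend": {
    "Type": "vxlan",
    "DirectRouting": true
  }
}
Changing "Type" to "host-gw" switches to pure host routing; DirectRouting (off by default) uses host routes between nodes on the same subnet and falls back to VXLAN across subnets.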

3.3.1 UDP mode

UDP mode is no longer recommended officially because of its relatively poor performance.

The core of UDP mode is the TUN device flannel0. A TUN device is a layer-3 virtual network device whose job is to pass IP packets between the OS kernel and a user-space program. Compared with direct host-to-host communication, the flanneld processing step is added; sending an IP packet through the flannel0 TUN device involves multiple user-space/kernel-space data copies (Linux context switches are expensive), so performance is very poor.

3.3.2 VxLAN mode

VxLAN is a network virtualization technique supported natively by Linux. VxLAN performs encapsulation and decapsulation entirely in kernel space and builds an overlay network through its tunnel mechanism.

The design idea of VxLAN: on top of the existing L3 network, overlay a virtual L2 network maintained by the kernel VxLAN module, so that "hosts" (virtual machines or containers) attached to this VxLAN L2 network can communicate as freely as if they were in one LAN. To open the "tunnel" on this L2 network, VxLAN places a special network device on each host as the two ends of the tunnel, called the VTEP: VxLAN Tunnel End Point.
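
The VTEP that flannel creates can be inspected on any node (the output shown is abbreviated and illustrative):
# ip -d link show flannel.1    #-d prints the vxlan details, e.g.: vxlan id 1 local 172.20.20.111 dev eth0 dstport 8472
Note that flannel's VXLAN backend defaults to UDP port 8472 rather than the IANA-assigned 4789.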

3.3.3 host-gw mode

Flannel's third protocol is host-gw (host gateway), the fastest: each node uses its own network interface as the gateway for pods, letting pods on different nodes communicate. Performance is better than VxLAN because there is no extra encapsulation overhead. The drawback is that all nodes must be in the same subnet.
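
In host-gw mode this materializes as one route per peer node on every host; an illustrative entry (addresses assumed from this environment's 10.200.0.0/16 pod network):
# ip route | grep 10.200
10.200.2.0/24 via 172.20.20.113 dev eth0    #peer node's pod subnet, next hop = that node's IP
Because the next hop must be directly reachable at L2, all nodes have to sit in one subnet.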

3.4 Flannel communication walkthrough

3.4.1 UDP mode

Cross-host container communication in UDP mode works as follows:


(diagram: Flannel UDP-mode packet encapsulation, transmission and decapsulation)

The figure above shows a packet being encapsulated, transmitted and decapsulated in Flannel's UDP mode. The two machines' docker0 bridges sit in different subnets: 10.1.20.1/24 and 10.1.15.1/24. When the Web App Frontend1 pod (10.1.15.2) connects to the Backend Service2 pod (10.1.20.3) on the other host, the packet travels from host 192.168.0.100 to 192.168.0.200: the inner container packet is encapsulated into the host's UDP and wrapped with the hosts' IP and MAC headers. This is a classic overlay network: because a container's IP is internal and cannot be used for cross-host communication, container connectivity must be carried on top of the host network.

3.4.2 VxLAN mode

In VxLAN mode, data flows through the source container's host as follows:

  1. The source container sends data toward the destination container; the data first reaches the docker0 bridge.
    View the routing table inside the source container:
$ kubectl exec -it {PodName} -c {ContainerName} -- ip route
  2. The docker0 bridge receives the data and hands it to the flannel.1 virtual interface.

After docker0 receives the packet, its kernel network stack reads the packet's destination address and forwards the packet to the next routing hop accordingly.

  3. flannel.1 receives the data, encapsulates it, and sends it to the host's eth0.

After flannel.1 receives the data, flanneld encapsulates the packet into an L2 Ethernet frame. The Ethernet header:

  • From: the MAC address of the source host's flannel.1 virtual interface
  • To: the MAC address of the destination host's flannel.1 virtual interface
  4. The data encapsulated at the flannel hop is encapsulated again and forwarded to the destination node's eth0.

So far the packet is only a vxlan tunnel packet and cannot yet be transmitted on the physical network; it must be encapsulated once more before it can travel from the source node to the destination node, and the Linux kernel performs this step. The outer Ethernet header:

  • From: the MAC address of the source node's NIC
  • To: the MAC address of the destination node's NIC

The outer IP header:

  • From: the IP address of the source node's NIC
  • To: the IP address of the destination node's NIC

With this encapsulation in place, the packet can be sent over the physical network.

On the destination host, the data flows as follows:

  1. The destination host's eth0 receives the data, decapsulates it, and forwards it to the flannel.1 virtual interface;
  2. the flannel.1 virtual interface receives the data and passes it to the docker0 bridge;
  3. finally the data reaches the destination container, completing container-to-container communication.

4. NetworkPolicy Ingress and Egress: Overview and Cases

4.1 NetworkPolicy overview

NetworkPolicy controls whether request traffic is denied or allowed at layer 3 (network) or layer 4 (transport).

  • Allow or deny specific pods access to all or specified ports of target pods in a target namespace.
  • Allow or deny specific namespaces access to all or specific pods, on all or specified ports, in a target namespace.
  • Allow or deny specific source IP ranges or addresses access to all or specific ports of target pods.


    (diagram: NetworkPolicy)
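
A pod selected by a NetworkPolicy becomes "isolated" for the listed direction, so a common baseline is a namespace-wide default-deny policy that allow rules such as the cases below then punch holes into; a minimal sketch:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: python
spec:
  podSelector: {} #select every pod in the namespace
  policyTypes:
  - Ingress #no ingress rules are listed, so all inbound traffic is denied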

4.2 NetworkPolicy environment preparation

4.2.1 Environment requirements

Environment:

  1. kubernetes 1.24.x, the calico network plugin, and at least 2 node hosts.
  2. Two namespaces, linux and python, representing Pods of different projects.
  3. Each namespace runs multiple pods, and pods may run on different node hosts.
  4. The test environment runs one nginx and one tomcat pod per namespace, to test pods of the same ns running on different hosts, as well as cross-ns communication and access restrictions.
#Create the namespaces
# kubectl create ns linux
namespace/linux created
# kubectl create ns python
namespace/python created
# kubectl label ns linux nsname=linux
namespace/linux labeled
# kubectl label ns python nsname=python
namespace/python labeled

#Check
# kubectl get ns linux --show-labels 
NAME    STATUS   AGE    LABELS
linux   Active   5m2s   kubernetes.io/metadata.name=linux,nsname=linux

# kubectl get ns python --show-labels 
NAME     STATUS   AGE    LABELS
python   Active   5m6s   kubernetes.io/metadata.name=python,nsname=python

4.2.2 Deploying the test web services

  1. In the linux ns, deploy nginx and tomcat, and have nginx forward requests for /app to the tomcat pod in the same namespace.
  2. In the python ns, deploy nginx and tomcat, and have nginx forward requests for /app to the tomcat pod in the same namespace.
  3. Test whether nginx running in the linux ns can forward requests to tomcat running in the python ns.
  4. Test containers (create several pods so that tests can later be run from different nodes):
#Create the test pods
# kubectl run net-test-centos-pod1 --image=centos:7.9.2009 sleep 10000000 -n linux
pod/net-test-centos-pod1 created
# kubectl run net-test-centos-pod1 --image=centos:7.9.2009 sleep 10000000 -n python
pod/net-test-centos-pod1 created
# kubectl run centos-net-test1 --image=centos:7.9.2009 sleep 360000
pod/centos-net-test1 created

#Check
# kubectl get pod -n linux -o wide
NAME                   READY   STATUS    RESTARTS   AGE    IP               NODE            NOMINATED NODE   READINESS GATES
net-test-centos-pod1   1/1     Running   0          4m9s   10.200.107.235   172.20.20.113   <none>           <none>
# kubectl get pod -n python -o wide
NAME                   READY   STATUS    RESTARTS   AGE    IP             NODE        NOMINATED NODE   READINESS GATES
net-test-centos-pod1   1/1     Running   0          4m2s   10.200.36.75   worker-01   <none>           <none>
# kubectl get pod -o wide
NAME                                       READY   STATUS    RESTARTS   AGE     IP               NODE        NOMINATED NODE   READINESS GATES
centos-net-test1                           1/1     Running   0          3m51s   10.200.36.76     worker-01   <none>           <none>

Deploy the services in the linux ns

#Prepare the yaml files
# vim nginx.yaml 
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: linux-nginx-deployment-label
  name: linux-nginx-deployment
  namespace: linux
spec:
  replicas: 1
  selector:
    matchLabels:
      app: linux-nginx-selector
  template:
    metadata:
      labels:
        app: linux-nginx-selector
    spec:
      containers:
      - name: linux-nginx-container
        image: nginx:1.20.2-alpine 
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          protocol: TCP
          name: http
        - containerPort: 443
          protocol: TCP
          name: https
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: linux-nginx-service-label
  name: linux-nginx-service
  namespace: linux
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30008
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
    nodePort: 30443
  selector:
    app: linux-nginx-selector

# vim tomcat.yaml 
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: linux-tomcat-app1-deployment-label
  name: linux-tomcat-app1-deployment
  namespace: linux
spec:
  replicas: 1
  selector:
    matchLabels:
      app: linux-tomcat-app1-selector
  template:
    metadata:
      labels:
        app: linux-tomcat-app1-selector
    spec:
      containers:
      - name: linux-tomcat-app1-container
        image: tomcat:7.0.109-jdk8-openjdk 
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: linux-tomcat-app1-service-label
  name: linux-tomcat-app1-service
  namespace: linux
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 30007
  selector:
    app: linux-tomcat-app1-selector

#Apply
# kubectl apply -f nginx.yaml 
deployment.apps/linux-nginx-deployment created
service/linux-nginx-service created
# kubectl apply -f tomcat.yaml 
deployment.apps/linux-tomcat-app1-deployment created
service/linux-tomcat-app1-service created

#Check
# kubectl get pod -n linux -o wide 
NAME                                            READY   STATUS    RESTARTS   AGE    IP               NODE            NOMINATED NODE   READINESS GATES
linux-nginx-deployment-5d57fd55bb-826dl         1/1     Running   0          2m5s   10.200.107.236   172.20.20.113   <none>           <none>
linux-tomcat-app1-deployment-5646755db6-d59gb   1/1     Running   0          119s   10.200.36.78     worker-01       <none>           <none>
net-test-centos-pod1                            1/1     Running   0          16m    10.200.107.235   172.20.20.113   <none>           <none>

# kubectl get svc -n linux
NAME                        TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
linux-nginx-service         NodePort   10.100.207.93   <none>        80:30008/TCP,443:30443/TCP   3m18s
linux-tomcat-app1-service   NodePort   10.100.215.51   <none>        80:30007/TCP                 3m11s

Deploy the services in the python ns

#Prepare the yaml files
# vim nginx.yaml 
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: python-nginx-deployment-label
  name: python-nginx-deployment
  namespace: python
spec:
  replicas: 1
  selector:
    matchLabels:
      app: python-nginx-selector
  template:
    metadata:
      labels:
        app: python-nginx-selector
        project: python
    spec:
      containers:
      - name: python-nginx-container
        image: nginx:1.20.2-alpine
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          protocol: TCP
          name: http
        - containerPort: 443
          protocol: TCP
          name: https
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: python-nginx-service-label
  name: python-nginx-service
  namespace: python
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30016
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
    nodePort: 30456
  selector:
    app: python-nginx-selector
    project: python #target pods must carry every label listed in the selector (AND semantics)

# vim tomcat.yaml 
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: python-tomcat-app1-deployment-label
  name: python-tomcat-app1-deployment
  namespace: python
spec:
  replicas: 1
  selector:
    matchLabels:
      app: python-tomcat-app1-selector
  template:
    metadata:
      labels:
        app: python-tomcat-app1-selector
    spec:
      nodeName: 172.20.20.113
      containers:
      - name: python-tomcat-app1-container
        image: tomcat:7.0.109-jdk8-openjdk 
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: python-tomcat-app1-service-label
  name: python-tomcat-app1-service
  namespace: python
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 30015
  selector:
    app: python-tomcat-app1-selector

#Apply
# kubectl apply -f nginx.yaml 
deployment.apps/python-nginx-deployment created
service/python-nginx-service created
# kubectl apply -f tomcat.yaml 
deployment.apps/python-tomcat-app1-deployment created
service/python-tomcat-app1-service created

#Check
# kubectl get pod -n python  -o wide
NAME                                             READY   STATUS    RESTARTS   AGE    IP               NODE            NOMINATED NODE   READINESS GATES
net-test-centos-pod1                             1/1     Running   0          36m    10.200.36.75     worker-01       <none>           <none>
python-nginx-deployment-748685f845-p44lb         1/1     Running   0          14m    10.200.107.237   172.20.20.113   <none>           <none>
python-tomcat-app1-deployment-6f768dd56f-p2hz5   1/1     Running   0          113s   10.200.107.238   172.20.20.113   <none>           <none>
# kubectl get svc -n python
NAME                         TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
python-nginx-service         NodePort   10.100.102.93   <none>        80:30016/TCP,443:30456/TCP   15m
python-tomcat-app1-service   NodePort   10.100.214.19   <none>        80:30015/TCP                 2m5s

Create the nginx test pages

# kubectl exec  -it linux-nginx-deployment-5d57fd55bb-826dl sh  -n linux 
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # cd /usr/share/nginx/html/
/usr/share/nginx/html # ls
50x.html    index.html
/usr/share/nginx/html # echo "networkpolicy-linux-nginx-test" > index.html 


# kubectl exec  -it python-nginx-deployment-748685f845-p44lb sh  -n python
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # echo "networkpolicy-python-nginx-test" > /usr/share/nginx/html/index.html
(screenshots: nginx test pages returned from both namespaces)

Create the tomcat test pages

# kubectl exec -it python-tomcat-app1-deployment-6f768dd56f-p2hz5 bash -n python 
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future versp
root@python-tomcat-app1-deployment-6f768dd56f-p2hz5:/usr/local/tomcat# cd webapps
root@python-tomcat-app1-deployment-6f768dd56f-p2hz5:/usr/local/tomcat/webapps# mkdir app
root@python-tomcat-app1-deployment-6f768dd56f-p2hz5:/usr/local/tomcat/webapps# echo "networkpolicy-python-tomcat-test" > app/index.jsp

# kubectl exec -it linux-tomcat-app1-deployment-5646755db6-d59gb bash -n linux 
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@linux-tomcat-app1-deployment-5646755db6-d59gb:/usr/local/tomcat# cd webapps
root@linux-tomcat-app1-deployment-5646755db6-d59gb:/usr/local/tomcat/webapps# mkdir app
root@linux-tomcat-app1-deployment-5646755db6-d59gb:/usr/local/tomcat/webapps# echo "networkpolicy-linux-tomcat-test" > app/index.jsp
(screenshots: tomcat test pages returned from both namespaces)

4.3 NetworkPolicy use cases

4.3.1 Ingress: pod-level restriction; only source pods with a specific label in the same namespace may access all ports of the target pod

# vim case1-ingress-podSelector.yaml 
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tomcat-access--networkpolicy
  namespace: python
spec:
  policyTypes:
  - Ingress
  podSelector:
    matchLabels:
      app: python-tomcat-app1-selector #apply the following rules to the matched target Pods
  ingress: #ingress rules; if no target port is specified, all ports and protocols (TCP, UDP or SCTP) are matched
  - from:
    - podSelector:
        matchLabels:
          #app: python-nginx-selector #multiple matchLabels conditions are ANDed, i.e. condition A, condition B and condition X must all hold
          project: "python"

# kubectl apply -f case1-ingress-podSelector.yaml 
networkpolicy.networking.k8s.io/tomcat-access--networkpolicy created

#Test access from the default namespace to resources in the linux and python namespaces
# kubectl exec -it centos-net-test1 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@centos-net-test1 /]# curl http://10.200.107.237/index.html    #access to the python-ns nginx pod on port 80 works
networkpolicy-python-nginx-test
[root@centos-net-test1 /]# curl http://10.200.107.238:8080/app/index.jsp    #access to the python-ns tomcat pod on 8080 fails
^C
[root@centos-net-test1 /]# curl http://10.200.107.236/index.html   #access to the linux-ns nginx pod on port 80 works
networkpolicy-linux-nginx-test
[root@centos-net-test1 /]# curl http://10.200.36.78:8080/app/index.jsp    #access to the linux-ns tomcat pod on 8080 works
networkpolicy-linux-tomcat-test

#Test access from the test pod in the python namespace
# kubectl exec -it -n python net-test-centos-pod1 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@net-test-centos-pod1 /]# curl http://10.200.107.237/index.html    #access to the python-ns nginx pod on port 80 works
networkpolicy-python-nginx-test
[root@net-test-centos-pod1 /]# curl http://10.200.107.238:8080/app/index.jsp  #access to the python-ns tomcat pod on 8080 fails
^C
[root@net-test-centos-pod1 /]# curl http://10.200.107.236/index.html    #access to the linux-ns nginx pod on port 80 works
networkpolicy-linux-nginx-test
[root@net-test-centos-pod1 /]# curl http://10.200.36.78:8080/app/index.jsp   #access to the linux-ns tomcat pod on 8080 works
networkpolicy-linux-tomcat-test

#Test access from the test pod in the linux namespace
# kubectl exec -it -n linux net-test-centos-pod1 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@net-test-centos-pod1 /]# curl http://10.200.107.237/index.html     #access to the python-ns nginx pod on port 80 works
networkpolicy-python-nginx-test
[root@net-test-centos-pod1 /]# curl http://10.200.107.238:8080/app/index.jsp    #access to the python-ns tomcat pod on 8080 fails
^C
[root@net-test-centos-pod1 /]# curl http://10.200.107.236/index.html     #access to the linux-ns nginx pod on port 80 works
networkpolicy-linux-nginx-test
[root@net-test-centos-pod1 /]# curl http://10.200.36.78:8080/app/index.jsp  #access to the linux-ns tomcat pod on 8080 works
networkpolicy-linux-tomcat-test

In summary:

  1. Pods in other namespaces may not access the target pod, i.e. cross-ns access to the target pod is denied by default.
  2. Pods not explicitly allowed cannot access it either, even within the same namespace.
  3. The target pod cannot be accessed from the hosts.
  4. The policy only allows same-namespace source pods with the specific label to access the target pod; e.g. tomcat only admits the specifically labeled source pod nginx.
  5. The policy does not affect pod-to-pod access inside other namespaces; linux-ns pods can still reach other linux-ns pods.
  6. The policy does not affect access to pods that are not explicitly restricted; linux-ns pods can still reach the other pods in the python ns.

4.3.2 Ingress: pod-plus-port restriction; only source pods with a specific label in the same namespace may access the specified port of the target pod

# vim case2-ingress-podSelector-ns-SinglePort.yaml 
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tomcat-access--networkpolicy
  namespace: python
spec:
  policyTypes:
  - Ingress
  podSelector:
    matchLabels:
      app: python-tomcat-app1-selector
  ingress:
  - from:
    - podSelector:
        matchLabels:
          #app: python-nginx-selector #source match condition; multiple matchLabels conditions are ANDed, i.e. conditions A, B and X must all hold
          project: "python"
    ports: #ingress rules; if no target port is specified, all ports and protocols (TCP, UDP or SCTP) are matched
    - protocol: TCP
      port: 8080 #allow TCP access to the target pod's port 8080; all other ports are denied
      #port: 80

#Before the networkpolicy is applied, nginx in the python namespace can reach tomcat on 8080
# kubectl exec -it python-nginx-deployment-748685f845-p44lb sh -n python 
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # curl http://10.200.107.238:8080/app/index.jsp
networkpolicy-python-tomcat-test

#Apply
# kubectl apply -f case2-ingress-podSelector-ns-SinglePort.yaml 
networkpolicy.networking.k8s.io/tomcat-access--networkpolicy created

# kubectl exec -it python-nginx-deployment-748685f845-p44lb sh -n python 
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # curl http://10.200.107.238:8080/app/index.jsp   #the nginx pod in the python namespace can still access it
networkpolicy-python-tomcat-test

#Change the 8080 in the networkpolicy above to 80
# vim case2-ingress-podSelector-ns-SinglePort.yaml
...
 ports: #ingress rules; if no target port is specified, all ports and protocols (TCP, UDP or SCTP) are matched
    - protocol: TCP
      #port: 8080 #allow TCP access to the target pod's port 8080; all other ports are denied
      port: 80

# kubectl apply -f case2-ingress-podSelector-ns-SinglePort.yaml 
networkpolicy.networking.k8s.io/tomcat-access--networkpolicy configured

# kubectl exec -it python-nginx-deployment-748685f845-p44lb sh -n python 
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # curl http://10.200.107.238:8080/app/index.jsp    #now the nginx pod in the python namespace fails to access it
^C

In summary:

  1. Only the specified source pods may access the specified port of the target pod in the same namespace.
  2. Ports not explicitly allowed are denied.

4.3.3 Ingress: allow all pods in the same namespace to access multiple specified ports of the target pod

# vim case3-ingress-podSelector-ns-MultiPort.yaml 
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tomcat-access-networkpolicy
  namespace: python
spec:
  policyTypes:
  - Ingress
  podSelector: #target pods
    matchLabels:
      app: python-tomcat-app1-selector
  ingress:
  - from:
    - podSelector: #source pod match; matchLabels: {} places no restriction on the source, i.e. all pods are allowed (omitting the field has the same effect)
        matchLabels: {}
    ports: #ingress rules; if no target port is specified, all ports and protocols (TCP, UDP or SCTP) are matched
    - protocol: TCP
      port: 8080 #allow TCP access to the target pod's port 8080; all other ports are denied
      #port: 80
    - protocol: TCP
      port: 3306
    - protocol: TCP
      port: 6379

#Apply the networkpolicy
# kubectl apply -f case3-ingress-podSelector-ns-MultiPort.yaml 
networkpolicy.networking.k8s.io/tomcat-access-networkpolicy created

#Verify
#Access test from an ordinary pod (default namespace)
# kubectl exec -it centos-net-test1 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@centos-net-test1 /]# curl http://10.200.107.238:8080/app/index.jsp
^C

#Access from pods in the python namespace
# kubectl exec -it -n python net-test-centos-pod1 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@net-test-centos-pod1 /]# curl http://10.200.107.238:8080/app/index.jsp
networkpolicy-python-tomcat-test

# kubectl exec -it python-nginx-deployment-748685f845-p44lb sh -n python 
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # curl http://10.200.107.238:8080/app/index.jsp
networkpolicy-python-tomcat-test

#Access from pods in the linux namespace
# kubectl exec -it -n linux net-test-centos-pod1 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@net-test-centos-pod1 /]# curl http://10.200.107.238:8080/app/index.jsp
^C

The rule that rejects the access

root@k8s-node3:~# iptables-save |grep 6379   #checked on the node where the pod runs; other hosts do not have this rule
-A cali-pi-_0JuoNaejpHrnq9ok3ft -p tcp -m comment --comment "cali:SrykeNIareiB4Pzz" -m comment --comment "Policy python/knp.default.tomcat-access-networkpolicy ingress" -m set --match-set cali40s:i_4-mErXOxx3I3EQOxm4JCQ src -m multiport --dports 3306,6379,8080 -j MARK --set-xmark 0x10000/0x10000

In summary:

  1. Other namespaces cannot access pods in the target ns.

4.3.4 Ingress: allow all pods in the same namespace to access all ports of the target pods

# cat case4-ingress-podSelector-ns.yaml 
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tomcat-access--networkpolicy
  namespace: python
spec:
  policyTypes:
  - Ingress
  podSelector: #target pods
    matchLabels: {} #match all target pods
  ingress:
  - from:
    - podSelector: #source pod match; matchLabels: {} places no restriction on the source, i.e. all pods are allowed (omitting the field has the same effect)
        matchLabels: {}
    #ports: #ingress rules; if no target port is specified, all ports and protocols (TCP, UDP or SCTP) are matched
    #- protocol: TCP
    #  port: {} #allow TCP access to the target pod's port 8080; all other ports are denied
      #port: 80
    #- protocol: TCP
    #  port: 3306
    #- protocol: TCP
    #  port: 6379

#Apply the networkpolicy
# kubectl apply -f case4-ingress-podSelector-ns.yaml 
networkpolicy.networking.k8s.io/tomcat-access--networkpolicy created

#Verify
#Access from an ordinary pod
# kubectl exec -it centos-net-test1 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@centos-net-test1 /]# curl http://10.200.107.238:8080/app/index.jsp
^C

#Access from pods in the python namespace
# kubectl exec -it -n python net-test-centos-pod1 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@net-test-centos-pod1 /]# curl http://10.200.107.238:8080/app/index.jsp
networkpolicy-python-tomcat-test

# kubectl exec -it python-nginx-deployment-748685f845-p44lb sh -n python 
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # curl http://10.200.107.238:8080/app/index.jsp
networkpolicy-python-tomcat-test

#Access from pods in the linux namespace
# kubectl exec -it -n linux net-test-centos-pod1 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@net-test-centos-pod1 /]# curl http://10.200.107.238:8080/app/index.jsp
^C

In summary:

  1. Other namespaces cannot access pods in the target ns.
  2. Pods in the same namespace can access any port of all pods in that namespace.

4.3.5 Ingress: ipBlock whitelist

# vim case5-ingress-ipBlock.yaml.bak 
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tomcat-access--networkpolicy
  namespace: python
spec:
  policyTypes:
  - Ingress
  podSelector: #target pods
    matchLabels:
      app: python-tomcat-app1-selector
  ingress:
  - from:
#    - podSelector: #source pod match; matchLabels: {} places no restriction on the source, i.e. all pods are allowed (omitting the field has the same effect)
#        matchLabels: {}
    - ipBlock:
        cidr: 10.200.0.0/16 #whitelist: the address range allowed to access; anything not allowed is denied access to the target pod
        except:
        - 10.200.219.0/24 #source IPs within the range above that are denied
        - 10.200.229.0/24 #source IPs within the range above that are denied
        - 10.200.107.235/32 #denies the test pod in the linux ns within the range above
    ports: #ingress rules; if no target port is specified, all ports and protocols (TCP, UDP or SCTP) are matched
    - protocol: TCP
      port: 8080 #allow TCP access to the target pod's port 8080; all other ports are denied
      #port: 80
    - protocol: TCP
      port: 3306
    - protocol: TCP
      port: 6379

#Apply
# kubectl apply -f case5-ingress-ipBlock.yaml.bak 
networkpolicy.networking.k8s.io/tomcat-access--networkpolicy created

#Verify
#Access from an ordinary pod
# kubectl exec -it centos-net-test1 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@centos-net-test1 /]# curl http://10.200.107.238:8080/app/index.jsp
networkpolicy-python-tomcat-test

#Access from pods in the python namespace
~# kubectl exec -it -n python net-test-centos-pod1 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@net-test-centos-pod1 /]# curl http://10.200.107.238:8080/app/index.jsp
networkpolicy-python-tomcat-test

#Access from pods in the linux namespace
# kubectl exec -it -n linux net-test-centos-pod1 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@net-test-centos-pod1 /]# curl http://10.200.107.238:8080/app/index.jsp
^C

In summary:

  1. Any source pod IP inside the whitelist that is not excluded by except is allowed to access.
  2. With only an ipBlock match configured, pods in other namespaces are also allowed as long as they fall outside the except ranges, i.e. pods in the linux ns can now access pods in the python ns if their IPs are not within the except ranges.

4.3.6 Ingress: namespaceSelector

Add the labels (already added at the start of this section)

# kubectl label ns linux nsname=linux
# kubectl label ns python nsname=python
# vim case6-ingress-namespaceSelector.yaml 
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tomcat-access--networkpolicy
  namespace: python
spec:
  policyTypes:
  - Ingress
  podSelector: #target pods
    matchLabels: {} #allow access to all pods in the python namespace
#      app: python-tomcat-app1-selector #or allow access only to specified pods in the python namespace
  ingress:
  - from:
#    - podSelector: #source pod match; matchLabels: {} places no restriction on the source, i.e. all pods are allowed (omitting the field has the same effect)
#        matchLabels: {}
#    - ipBlock:
#        cidr: 10.200.0.0/16 #source network range
#        except:
#        - 10.200.218.0/24 #source IPs within the range above that are denied

    - namespaceSelector:
#        matchLabels: {} #allow all namespaces to access the python namespace's specified ports, or specified pods plus ports
        matchLabels:
          nsname: linux #only the specified namespace is allowed
    - namespaceSelector:
        matchLabels:
          nsname: python #only the specified namespace is allowed
    ports: #ingress rules; if no target port is specified, all ports and protocols (TCP, UDP or SCTP) are matched
    - protocol: TCP
      port: 8080 #allow TCP access to the target pod's port 8080; all other ports are denied
      #port: 80
    - protocol: TCP
      port: 3306
    - protocol: TCP
      port: 6379

#Apply
# kubectl apply -f case6-ingress-namespaceSelector.yaml 
networkpolicy.networking.k8s.io/tomcat-access--networkpolicy created

#Verify
#Access from an ordinary pod
# kubectl exec -it centos-net-test1 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@centos-net-test1 /]# curl http://10.200.107.238:8080/app/index.jsp
^C

#Access from pods in the python namespace
# kubectl exec -it -n python net-test-centos-pod1 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@net-test-centos-pod1 /]# curl http://10.200.107.238:8080/app/index.jsp
networkpolicy-python-tomcat-test

#Access from pods in the linux namespace
# kubectl exec -it -n linux net-test-centos-pod1 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@net-test-centos-pod1 /]# curl http://10.200.107.238:8080/app/index.jsp
networkpolicy-python-tomcat-test

In summary:

  1. Pods in explicitly allowed namespaces can access the target pods.
  2. Namespaces not explicitly allowed are denied.
  3. Without an explicit allow, access is denied even from the same namespace.
  4. For example, with only the linux and python ns allowed, pods in default cannot access the target.

4.3.7 Egress: podSelector; restrict outbound destination IPs and ports; matched pods may access only the specified destination address ranges and ports

# vim case7-Egress-ipBlock.yaml 
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-access-networkpolicy
  namespace: python
spec:
  policyTypes:
  - Egress
  podSelector: #selects the pods whose egress is restricted
    matchLabels:  #match source pods by label
      app: python-tomcat-app1-selector #matches pods in the python namespace whose app label is python-tomcat-app1-selector, then applies the egress rules below to their outbound traffic
  egress:
  - to:
    - ipBlock:
        cidr: 10.200.0.0/16 #destination CIDR range the matched pods may access
    - ipBlock:
        cidr: 172.31.7.106/32 #destination host the matched pods may access
    - ipBlock:
        cidr: 0.0.0.0/0 #destination host the matched pods may access
    ports:
    - protocol: TCP
      port: 80 #allow the matched pods to access destination port 80
    - protocol: TCP
      port: 53 #allow the matched pods to access destination port 53, i.e. DNS resolution
    - protocol: UDP
      port: 53 #allow the matched pods to access destination port 53, i.e. DNS resolution

#Before the networkpolicy is applied
# kubectl exec -it python-tomcat-app1-deployment-6f768dd56f-p2hz5 bash -n python 
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@python-tomcat-app1-deployment-6f768dd56f-p2hz5:/usr/local/tomcat# ping www.jd.com
PING jd-abroad.cdn20.com (220.167.102.98) 56(84) bytes of data.
64 bytes from 98.102.167.220.dial.dy.sc.dynamic.163data.com.cn (220.167.102.98): icmp_seq=1 ttl=127 time=9.65 ms
64 bytes from 98.102.167.220.dial.dy.sc.dynamic.163data.com.cn (220.167.102.98): icmp_seq=2 ttl=127 time=10.9 ms
64 bytes from 98.102.167.220.dial.dy.sc.dynamic.163data.com.cn (220.167.102.98): icmp_seq=3 ttl=127 time=4.91 ms
^C
--- jd-abroad.cdn20.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 6ms
rtt min/avg/max/mdev = 4.913/8.492/10.915/2.585 ms
root@python-tomcat-app1-deployment-6f768dd56f-p2hz5:/usr/local/tomcat# curl http://10.200.107.236/index.html
networkpolicy-linux-nginx-test
root@python-tomcat-app1-deployment-6f768dd56f-p2hz5:/usr/local/tomcat# curl http://10.200.36.78:8080/app/index.jsp
networkpolicy-linux-tomcat-test

#Apply
# kubectl apply -f case7-Egress-ipBlock.yaml 
networkpolicy.networking.k8s.io/egress-access-networkpolicy created

#With the networkpolicy applied
# kubectl exec -it python-tomcat-app1-deployment-6f768dd56f-p2hz5 bash -n python 
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@python-tomcat-app1-deployment-6f768dd56f-p2hz5:/usr/local/tomcat# ping www.jd.com
PING jd-abroad.cdn20.com (220.167.102.98) 56(84) bytes of data.
^C
--- jd-abroad.cdn20.com ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

root@python-tomcat-app1-deployment-6f768dd56f-p2hz5:/usr/local/tomcat# curl http://10.200.107.236/index.html
networkpolicy-linux-nginx-test
root@python-tomcat-app1-deployment-6f768dd56f-p2hz5:/usr/local/tomcat# curl http://10.200.36.78:8080/app/index.jsp
^C

In summary:

  1. Based on the egress whitelist, matched pods in the ns may access the addresses listed in ipBlock on the ports listed in ports.
  2. Requests from matched pods to other IPs not explicitly whitelisted in egress are rejected.
  3. Outbound requests initiated by pods that did not match the selector are unaffected.

4.3.8 Egress: podSelector; restrict outbound destination pods; matched pods may access only the specified pods and ports

# vim case8-Egress-PodSelector.yaml 
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-access-networkpolicy
  namespace: python
spec:
  policyTypes:
  - Egress
  podSelector: #selects the pods whose egress is restricted
    matchLabels:  #match source pods by label
      app: python-nginx-selector #matches pods in the python namespace whose app label is python-nginx-selector, then applies the egress rules below to their outbound traffic
  egress:
  - to:
#    - ipBlock:
#        cidr: 10.200.0.0/16 #destination CIDR range allowed
#    - ipBlock:
#        cidr: 172.31.7.106/32 #destination host allowed
#    - ipBlock:
#        cidr: 10.200.218.4/32 #whitelist: destination host allowed
    - podSelector: #destination pod match; matchLabels: {} places no restriction, i.e. all pods are allowed (omitting the field has the same effect)
        matchLabels:
          app: python-tomcat-app1-selector
    ports:
    - protocol: TCP
      port: 8080 #allow access to port 8080
    - protocol: TCP
      port: 53 #allow DNS resolution
    - protocol: UDP
      port: 53

#Before the networkpolicy is applied
# kubectl exec -it python-nginx-deployment-748685f845-p44lb sh -n python 
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # curl http://10.200.107.237/index.html
networkpolicy-python-nginx-test
/ # curl http://10.200.107.236/index.html
networkpolicy-linux-nginx-test
/ # curl http://10.200.36.78:8080/app/index.jsp
networkpolicy-linux-tomcat-test
/ # curl http://10.200.107.238:8080/app/index.jsp
networkpolicy-python-tomcat-test

#Apply
# kubectl apply -f case8-Egress-PodSelector.yaml 
networkpolicy.networking.k8s.io/egress-access-networkpolicy created

#With the networkpolicy applied
# kubectl exec -it python-nginx-deployment-748685f845-p44lb sh -n python 
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # curl http://10.200.36.78:8080/app/index.jsp
^C
/ # curl http://10.200.107.236/index.html
^C
/ # curl http://10.200.107.237/index.html
networkpolicy-python-nginx-test
/ # curl http://10.200.107.238:8080/app/index.jsp
networkpolicy-python-tomcat-test

In summary:
Based on the podSelector, this restricts which destination pods the source pods can access.

  1. Matched source pods may access only the specified ports of the specified destination pods.
  2. All other outbound requests not allowed are denied.

4.3.9 Egress-namespaceSelector

# vim case9-Egress-namespaceSelector.yaml 
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-access-networkpolicy
  namespace: python
spec:
  policyTypes:
  - Egress
  podSelector: #selects the pods whose egress is restricted
    matchLabels:  #match source pods by label
      app: python-nginx-selector #matches pods in the python namespace whose app label is python-nginx-selector, then applies the egress rules below to their outbound traffic
  egress:
  - to:
#    - ipBlock:
#        cidr: 10.200.0.0/16 #destination CIDR range allowed
#    - ipBlock:
#        cidr: 172.31.7.106/32 #destination host allowed
#    - ipBlock:
#        cidr: 10.200.218.4/32 #destination host allowed
#    - podSelector: #destination pod match; matchLabels: {} places no restriction, i.e. all pods are allowed (omitting the field has the same effect)
#        matchLabels:
#          app: python-tomcat-app1-selector

    - namespaceSelector:
        matchLabels:
          nsname: python #destination namespace allowed
    - namespaceSelector:
        matchLabels:
          nsname: linux #destination namespace allowed
    ports:
    - protocol: TCP
      port: 8080 #allow access to port 8080
    - protocol: TCP
      port: 53 #allow DNS resolution
    - protocol: UDP
      port: 53

#Apply
# kubectl apply -f case9-Egress-namespaceSelector.yaml 
networkpolicy.networking.k8s.io/egress-access-networkpolicy created

#Verify
# kubectl exec -it python-nginx-deployment-748685f845-p44lb sh -n python 
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # curl http://10.200.107.236/index.html    #access to the linux-ns nginx pod on port 80 fails
^C
/ # curl http://10.200.36.78:8080/app/index.jsp    #access to the linux-ns tomcat pod on 8080 works
networkpolicy-linux-tomcat-test
/ # curl http://10.200.107.237/index.html    #accessing itself works
networkpolicy-python-nginx-test
/ # curl http://10.200.107.238:8080/app/index.jsp    #access to the python-ns tomcat pod on 8080 works
networkpolicy-python-tomcat-test

In summary:
Matched pods are restricted to the specified namespaces.

  1. Matched source pods may access the specified destination namespaces.
  2. They cannot access other namespaces or the internet.
  3. For example, the specified source pods are allowed to access ports such as 8080, 3306 and 6379 in the linux and python ns, while other ns cannot be reached.

5. Ingress Usage Summary

5.1 Kubernetes service types

Kubernetes service types:

  • ExternalName
  • NodePort
  • ClusterIP
  • LoadBalancer

5.1.1 ClusterIP

# kubectl explain service.spec.type

ClusterIP: the default type, used for service-to-service access inside k8s, i.e. services reach each other through the internal service IP; the service IP is only reachable inside the cluster, not from outside.
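
A minimal ClusterIP service needs only a selector and a port mapping; a sketch reusing labels from the examples above (the service name is an assumption):
apiVersion: v1
kind: Service
metadata:
  name: web-tomcat-app1-clusterip
  namespace: web
spec:
  type: ClusterIP #may be omitted; ClusterIP is the default
  ports:
  - port: 80 #service port, reachable only inside the cluster
    targetPort: 8080 #container port
  selector:
    app: web-tomcat-app1-selector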

5.1.2 NodePort

NodePort: on top of the cluster IP, exposes the service by listening on a host port (nodePort, which can be specified) on every node, allowing external clients to access services in the k8s cluster; the nodePort forwards the external client's requests to the service for processing.

5.1.3 LoadBalancer

LoadBalancer: used mainly on public clouds such as Alibaba Cloud and AWS. LoadBalancer builds on nodePort and exposes services in the k8s cluster to external clients through the cloud provider's load balancer.

# vim 0.1.loadbalancer-dashboard.yaml 
kind: Service
apiVersion: v1
metadata:
  namespace: kubernetes-dashboard 
  name: dashboard-lb
  labels:
    k8s-app: kubernetes-dashboard
spec:
  ports:
    - protocol: TCP
      port: 8443
      targetPort: 8443
      nodePort: 30063
  type: LoadBalancer
  selector:
     k8s-app: kubernetes-dashboard

# kubectl apply -f 0.1.loadbalancer-dashboard.yaml 
service/dashboard-lb created

# kubectl get svc -n kubernetes-dashboard 
NAME                        TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
dashboard-lb                LoadBalancer   10.100.212.223   <pending>     8443:30063/TCP   14s    #stays pending: a LoadBalancer needs a public-cloud environment to provision an external IP, which a private environment cannot provide
dashboard-metrics-scraper   ClusterIP      10.100.134.141   <none>        8000/TCP         43d
kubernetes-dashboard        NodePort       10.100.81.11     <none>        443:30000/TCP    43d

5.1.4 ExternalName

ExternalName: maps a service outside the k8s cluster into the cluster, so that pods inside can reach the external service through a fixed service name; it is also sometimes used to let pods in different namespaces reach each other via an ExternalName.

# vim 0.2.externalName.yaml 
apiVersion: v1
kind: Service
metadata:
  name: my-external-test-name
  namespace: default
spec:
  type: ExternalName  #service type
  externalName: www.jd.com   #external domain name

# kubectl apply -f 0.2.externalName.yaml 
service/my-external-test-name created

# kubectl get svc
NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
kubernetes              ClusterIP      10.100.0.1       <none>        443/TCP                      45d
linux70-nginx-service   NodePort       10.100.109.164   <none>        80:30006/TCP,443:30444/TCP   38d
my-external-test-name   ExternalName   <none>           www.jd.com    <none>                       7s


#Verify
# kubectl run centos-net-test1 --image=centos:7.9.2009 sleep 360000
pod/centos-net-test1 created
# kubectl exec -it centos-net-test1 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@centos-net-test1 /]# ping my-external-test-name
PING jd-abroad.cdn20.com (220.167.102.98) 56(84) bytes of data.
64 bytes from 98.102.167.220.dial.dy.sc.dynamic.163data.com.cn (220.167.102.98): icmp_seq=1 ttl=127 time=6.18 ms
64 bytes from 98.102.167.220.dial.dy.sc.dynamic.163data.com.cn (220.167.102.98): icmp_seq=2 ttl=127 time=6.15 ms

^C--- jd-abroad.cdn20.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 6.156/6.168/6.181/0.079 ms
[root@centos-net-test1 /]# ping www.jd.com
PING jd-abroad.cdn20.com (220.167.102.98) 56(84) bytes of data.
64 bytes from 98.102.167.220.dial.dy.sc.dynamic.163data.com.cn (220.167.102.98): icmp_seq=1 ttl=127 time=6.54 ms
^C
--- jd-abroad.cdn20.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 6.541/6.541/6.541/0.000 ms

Endpoints

#Install the redis service on 172.20.20.128
# apt install redis
# vim /etc/redis/redis.conf
# grep ^bind /etc/redis/redis.conf 
bind 0.0.0.0 ::1
# systemctl restart redis-server.service
# vim 0.3.Endpoints.yaml 
apiVersion: v1
kind: Service
metadata:
  name: mysql-production-server-name
  namespace: default
spec:
  ports:
    - port: 6379
---
kind: Endpoints
apiVersion: v1
metadata:
  name: mysql-production-server-name
  namespace: default
subsets:
  - addresses:
      - ip: 172.20.20.128   #node where the redis service runs
    ports:
      - port: 6379
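
#Apply the manifest before verifying (step assumed here, mirroring the earlier examples)
# kubectl apply -f 0.3.Endpoints.yaml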

#Verify
# kubectl exec -it centos-net-test1 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@centos-net-test1 /]# telnet mysql-production-server-name 6379
Trying 10.100.66.125...
Connected to mysql-production-server-name.
Escape character is '^]'.
info
$3246
# Server
redis_version:5.0.7
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:66bd629f924ac924
redis_mode:standalone
os:Linux 5.4.0-156-generic x86_64
arch_bits:64
multiplexing_api:epoll
atomicvar_api:atomic-builtin
gcc_version:9.3.0
process_id:27719
run_id:3ec2adbdc1719a4c922b1c3e26de8fb5f1e304cc
tcp_port:6379
uptime_in_seconds:10
uptime_in_days:0
hz:10
configured_hz:10
lru_clock:358728
executable:/usr/bin/redis-server
config_file:/etc/redis/redis.conf

If the Service is of ClusterIP type, traffic from the clusterIP to the pods is TCP by default; TCP supports specific services such as MySQL and Redis, and UDP and SCTP are also available.

5.2 Ingress overview

Ingress: https://kubernetes.io/zh/docs/concepts/services-networking/ingress/

  • Ingress is one of the standard resource types in the kubernetes API. It implements application-layer rules that forward client requests, based on the requested host name or URL path, to a specified service resource; i.e. it forwards requests from outside the kubernetes cluster to a service inside the cluster, which in turn forwards them to the pods that handle the client requests.

Ingress controller:https://kubernetes.io/zh/docs/concepts/services-networking/ingress-controllers/

  • An ingress resource specifies configuration such as the listen address and the requested host and URL, and client requests are forwarded according to these matching rules. The component that watches ingress resources and forwards traffic for them is called the ingress controller. An ingress controller is a kubernetes add-on, similar to the dashboard or flannel, and must be deployed separately.


    (diagram: ingress controller)
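
For reference, an Ingress resource that routes by host and path to the service deployed in 5.3.1 might look like this (a sketch: the hostname is a placeholder, and ingressClassName assumes the nginx class installed in 5.3.2):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-tomcat-ingress
  namespace: web
spec:
  ingressClassName: nginx
  rules:
  - host: www.example.com #placeholder hostname
    http:
      paths:
      - path: /app1
        pathType: Prefix
        backend:
          service:
            name: web-tomcat-app1-service #service from 5.3.1
            port:
              number: 80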

5.3 Deploying the web services and the controller

The ingress-nginx project


(screenshot: the ingress-nginx project page)

5.3.1 Deploying the web services

# vim tomcat-app1.yaml 
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: web-tomcat-app1-deployment-label
  name: web-tomcat-app1-deployment
  namespace: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-tomcat-app1-selector
  template:
    metadata:
      labels:
        app: web-tomcat-app1-selector
    spec:
      containers:
      - name: web-tomcat-app1-container
        image: tomcat:7.0.94-alpine 
        #command: ["/apps/tomcat/bin/run_tomcat.sh"]
        #imagePullPolicy: IfNotPresent
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        resources:
          limits:
            cpu: 1
            memory: "512Mi"
          requests:
            cpu: 500m
            memory: "512Mi"
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: web-tomcat-app1-service-label
  name: web-tomcat-app1-service
  namespace: web
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 40003
  selector:
    app: web-tomcat-app1-selector

# vim tomcat-app2.yaml 
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: web-tomcat-app2-deployment-label
  name: web-tomcat-app2-deployment
  namespace: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-tomcat-app2-selector
  template:
    metadata:
      labels:
        app: web-tomcat-app2-selector
    spec:
      containers:
      - name: web-tomcat-app2-container
        image: tomcat:7.0.94-alpine 
        #command: ["/apps/tomcat/bin/run_tomcat.sh"]
        #imagePullPolicy: IfNotPresent
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        resources:
          limits:
            cpu: 1
            memory: "512Mi"
          requests:
            cpu: 500m
            memory: "512Mi"
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: web-tomcat-app2-service-label
  name: web-tomcat-app2-service
  namespace: web
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 40004
  selector:
    app: web-tomcat-app2-selector

#Apply
# kubectl apply -f tomcat-app1.yaml 
deployment.apps/web-tomcat-app1-deployment created
service/web-tomcat-app1-service created
# kubectl apply -f tomcat-app2.yaml 
deployment.apps/web-tomcat-app2-deployment created
service/web-tomcat-app2-service created

#Prepare test pages
# kubectl exec -it web-tomcat-app1-deployment-69445659bd-frq84 bash -n web
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
bash-4.4# cd webapps/
bash-4.4# mkdir app1
bash-4.4# echo "tomcat-app1-ingress-nginx-test" > app1/index.jsp

# kubectl exec -it web-tomcat-app2-deployment-5cdf5bdc4-4pr2r bash -n web
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
bash-4.4# cd webapps/
bash-4.4# mkdir app2
bash-4.4# echo "tomcat-app2-ingress-nginx-test" > app2/index.jsp

Test


(screenshots: app1 and app2 test pages served via NodePort)

5.3.2 Deploying the ingress-nginx controller

#Prepare the yaml file
# grep 'image:' deploy.yaml   #the images below cannot be pulled from inside China, so prepare them first
        image: registry.k8s.io/ingress-nginx/controller:v1.8.2@sha256:74834d3d25b336b62cabeb8bf7f1d788706e2cf1cfd64022de4137ade8881ff2
        image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b
        image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b

#Pull the container images
# nerdctl pull anjia0532/google-containers.ingress-nginx.controller:v1.8.2
# nerdctl pull  dyrnq/kube-webhook-certgen:v20230407   #originally meant to use the same source as the controller image above, but pushing to harbor failed for both sources, even after restarting harbor; the push only succeeded after clearing unused images on the download node, so the other source should work too and the download node likely had a transient problem. Error: FATA[0000] failed to create a tmp single-platform image "harbor.zhao.net/baseimages/kube-webhook-certgen:v20230407-tmp-reduced-platform": content digest sha256:10f855b03c8aee4fb0b9b7031c333640d684bd9ee6045f11f9892c7fea394701: not found

#Tag
# nerdctl tag anjia0532/google-containers.ingress-nginx.controller:v1.8.2 harbor.zhao.net/baseimages/controller:v1.8.2
# nerdctl tag dyrnq/kube-webhook-certgen:v20230407 harbor.zhao.net/baseimages/kube-webhook-certgen:v20230407

#Push
# nerdctl push harbor.zhao.net/baseimages/controller:v1.8.2
# nerdctl push harbor.zhao.net/baseimages/kube-webhook-certgen:v20230407

# grep 'image:' deploy.yaml 
        image: harbor.zhao.net/baseimages/controller:v1.8.2
        image: harbor.zhao.net/baseimages/kube-webhook-certgen:v20230407
        image: harbor.zhao.net/baseimages/kube-webhook-certgen:v20230407
#Full yaml file
# vim deploy.yaml 
apiVersion: v1
kind: Namespace
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx
---
apiVersion: v1
automountServiceAccountToken: true
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.8.2
  name: ingress-nginx
  namespace: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.8.2
  name: ingress-nginx-admission
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.8.2
  name: ingress-nginx
  namespace: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  - endpoints
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - networking.k8s.io
  resources:
  - ingressclasses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - coordination.k8s.io
  resourceNames:
  - ingress-nginx-leader
  resources:
  - leases
  verbs:
  - get
  - update
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.8.2
  name: ingress-nginx-admission
  namespace: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
  - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.8.2
  name: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - networking.k8s.io
  resources:
  - ingressclasses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.8.2
  name: ingress-nginx-admission
rules:
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - validatingwebhookconfigurations
  verbs:
  - get
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.8.2
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
- kind: ServiceAccount
  name: ingress-nginx
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.8.2
  name: ingress-nginx-admission
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
  name: ingress-nginx-admission
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.8.2
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
- kind: ServiceAccount
  name: ingress-nginx
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.8.2
  name: ingress-nginx-admission
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
  name: ingress-nginx-admission
  namespace: ingress-nginx
---
apiVersion: v1
data:
  allow-snippet-annotations: "true"
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.8.2
  name: ingress-nginx-controller
  namespace: ingress-nginx
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.8.2
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - appProtocol: http
    name: http
    port: 80
    protocol: TCP
    targetPort: http
  - appProtocol: https
    name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.8.2
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  ports:
  - appProtocol: https
    name: https-webhook
    port: 443
    targetPort: webhook
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.8.2
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  minReadySeconds: 0
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/name: ingress-nginx
  strategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.8.2
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --election-id=ingress-nginx-leader
        - --controller-class=k8s.io/ingress-nginx
        - --ingress-class=nginx
        - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: LD_PRELOAD
          value: /usr/local/lib/libmimalloc.so
        image: harbor.zhao.net/baseimages/controller:v1.8.2
        imagePullPolicy: IfNotPresent
        lifecycle:
          preStop:
            exec:
              command:
              - /wait-shutdown
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: controller
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
        - containerPort: 8443
          name: webhook
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          requests:
            cpu: 100m
            memory: 90Mi
        securityContext:
          allowPrivilegeEscalation: true
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
          runAsUser: 101
        volumeMounts:
        - mountPath: /usr/local/certificates/
          name: webhook-cert
          readOnly: true
      dnsPolicy: ClusterFirst
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
      - name: webhook-cert
        secret:
          secretName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.8.2
  name: ingress-nginx-admission-create
  namespace: ingress-nginx
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.8.2
      name: ingress-nginx-admission-create
    spec:
      containers:
      - args:
        - create
        - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
        - --namespace=$(POD_NAMESPACE)
        - --secret-name=ingress-nginx-admission
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        image: harbor.zhao.net/baseimages/kube-webhook-certgen:v20230407
        imagePullPolicy: IfNotPresent
        name: create
        securityContext:
          allowPrivilegeEscalation: false
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: OnFailure
      securityContext:
        fsGroup: 2000
        runAsNonRoot: true
        runAsUser: 2000
      serviceAccountName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.8.2
  name: ingress-nginx-admission-patch
  namespace: ingress-nginx
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.8.2
      name: ingress-nginx-admission-patch
    spec:
      containers:
      - args:
        - patch
        - --webhook-name=ingress-nginx-admission
        - --namespace=$(POD_NAMESPACE)
        - --patch-mutating=false
        - --secret-name=ingress-nginx-admission
        - --patch-failure-policy=Fail
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        image: harbor.zhao.net/baseimages/kube-webhook-certgen:v20230407
        imagePullPolicy: IfNotPresent
        name: patch
        securityContext:
          allowPrivilegeEscalation: false
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: OnFailure
      securityContext:
        fsGroup: 2000
        runAsNonRoot: true
        runAsUser: 2000
      serviceAccountName: ingress-nginx-admission
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.8.2
  name: nginx
spec:
  controller: k8s.io/ingress-nginx
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.8.2
  name: ingress-nginx-admission
webhooks:
- admissionReviewVersions:
  - v1
  clientConfig:
    service:
      name: ingress-nginx-controller-admission
      namespace: ingress-nginx
      path: /networking/v1/ingresses
  failurePolicy: Fail
  matchPolicy: Equivalent
  name: validate.nginx.ingress.kubernetes.io
  rules:
  - apiGroups:
    - networking.k8s.io
    apiVersions:
    - v1
    operations:
    - CREATE
    - UPDATE
    resources:
    - ingresses
  sideEffects: None
#Apply
# kubectl apply -f deploy.yaml 
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created

#Verify
# kubectl get pod -n ingress-nginx 
NAME                                       READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-jrjv9       0/1     Completed   0          62s
ingress-nginx-admission-patch-tn8m9        0/1     Completed   1          62s
ingress-nginx-controller-f78cfb85d-qxrb6   1/1     Running     0          61s
# kubectl get svc -n ingress-nginx 
NAME                                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.100.130.219   <none>        80:46681/TCP,443:62506/TCP   74s
ingress-nginx-controller-admission   ClusterIP   10.100.91.232    <none>        443/TCP                      74s
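Before any Ingress objects exist, the controller can be sanity-checked on its NodePort; with no rules defined it should answer with the default-backend 404 (the node IP below is one of the cluster nodes and is an assumption):

# curl -s -o /dev/null -w '%{http_code}\n' http://172.20.20.111:46681/
#expected: 404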

5.3.3 Configure load balancing

root@k8s-ha1:~# vim /etc/haproxy/haproxy.cfg 
...
listen ingress_nginx-80
  bind 172.20.20.191:80
  mode tcp
  server 172.20.20.101 172.20.20.111:46681 check inter 3s fall 3 rise 3
  server 172.20.20.102 172.20.20.112:46681 check inter 3s fall 3 rise 3
  server 172.20.20.103 172.20.20.113:46681 check inter 3s fall 3 rise 3


listen ingress_nginx-443
  bind 172.20.20.191:443
  mode tcp
  server 172.20.20.101 172.20.20.111:62506 check inter 3s fall 3 rise 3
  server 172.20.20.102 172.20.20.112:62506 check inter 3s fall 3 rise 3
  server 172.20.20.103 172.20.20.113:62506 check inter 3s fall 3 rise 3

root@k8s-ha1:~# systemctl restart haproxy.service
root@k8s-ha1:~# ss -ntpl|grep 172.20.20.191
LISTEN    0         4096         172.20.20.191:443              0.0.0.0:*        users:(("haproxy",pid=357796,fd=12))                                           
LISTEN    0         4096         172.20.20.191:80               0.0.0.0:*        users:(("haproxy",pid=357796,fd=11)) 
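With HAProxy up, the same check through the VIP confirms the whole chain works before any Ingress rules are added:

# curl -s -o /dev/null -w '%{http_code}\n' http://172.20.20.191/
#expected: 404 from the controller's default backend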

5.3.4 Single-host and multi-host ingress

Traffic is forwarded based on the host (domain) in the client request.



5.3.4.1 A single virtual host

# cat 2.1.ingress_single-host.yaml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-web
  namespace: web
  annotations:
    kubernetes.io/ingress.class: "nginx" ##which Ingress Controller handles this Ingress (annotation form; newer versions prefer spec.ingressClassName)
    nginx.ingress.kubernetes.io/use-regex: "true" ##allow regular expressions in the paths defined under rules
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "600" ##backend connection timeout, default 5s
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600" ##timeout for sending data to the backend, default 60s
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600" ##timeout for reading the backend response, default 60s
    nginx.ingress.kubernetes.io/proxy-body-size: "50m" ##maximum client request body (upload) size, default 1m
    #nginx.ingress.kubernetes.io/rewrite-target: / ##URL rewrite
    nginx.ingress.kubernetes.io/app-root: /index.html

#legacy (pre-networking.k8s.io/v1) backend format, kept commented for reference:
#spec:
#  rules: #routing rules
#  - host: www.zhao.com ##the host domain requested by the client
#    http:
#      paths:
#      - path:
#        backend:
#          serviceName: web-nginx-service  #which service to forward to
#          servicePort: 80 ##which service port to forward to

spec:
  rules:
  - host: www.zhao.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: web-tomcat-app1-service 
            port:
              number: 80

# kubectl apply -f 2.1.ingress_single-host.yaml 
ingress.networking.k8s.io/nginx-web created

Add a local hosts entry
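On a Linux client this is a one-line addition pointing the test domain at the HAProxy VIP from 5.3.3:

# vim /etc/hosts
172.20.20.191 www.zhao.com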


(screenshot of the hosts entry omitted)

Access www.zhao.com in a browser (screenshot omitted).

5.3.4.2 Multiple virtual hosts

# vim 2.2.ingress_multi-host.yaml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-web
  namespace: web
  annotations:
    kubernetes.io/ingress.class: "nginx" ##which Ingress Controller handles this Ingress
    nginx.ingress.kubernetes.io/use-regex: "true" ##allow regular expressions in the paths defined under rules
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "600" ##backend connection timeout, default 5s
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600" ##timeout for sending data to the backend, default 60s
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600" ##timeout for reading the backend response, default 60s
    nginx.ingress.kubernetes.io/proxy-body-size: "10m" ##maximum client request body (upload) size, default 1m
    #nginx.ingress.kubernetes.io/rewrite-target: / ##URL rewrite
    nginx.ingress.kubernetes.io/app-root: /index.html
#spec:
#  rules:
#  - host: www.zhao.com
#    http:
#      paths:
#      - path:
#        backend:
#          serviceName: web-tomcat-app1-service
#          servicePort: 80

#  - host: mobile.zhao.com
#    http:
#      paths:
#      - path:
#        backend:
#          serviceName: web-tomcat-app2-service 
#          servicePort: 80

spec:
  rules:
  - host: www.zhao.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: web-tomcat-app1-service
            port:
              number: 80


  - host: mobile.zhao.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: web-tomcat-app2-service
            port:
              number: 80

# kubectl apply -f 2.2.ingress_multi-host.yaml 
ingress.networking.k8s.io/nginx-web created
# kubectl get ingress -n web
NAME        CLASS    HOSTS                          ADDRESS   PORTS   AGE
nginx-web   <none>   www.zhao.com,mobile.zhao.com             80      9s

Update the local hosts entries for both domains and access them in a browser (screenshot omitted).
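The same test can be run from the command line without editing /etc/hosts, pinning each domain to the VIP with curl's --resolve (the app2 path is the test page created at the start of this section):

# curl --resolve www.zhao.com:80:172.20.20.191 http://www.zhao.com/
# curl --resolve mobile.zhao.com:80:172.20.20.191 http://mobile.zhao.com/app2/index.jsp
#expected from the app2 backend: tomcat-app2-ingress-nginx-test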

5.3.5 URL-based ingress

Traffic for the same host is forwarded based on the URL path of the client request.

# cat 3.1.ingress-url.yaml 

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-web
  namespace: web
  annotations:
    kubernetes.io/ingress.class: "nginx" ##which Ingress Controller handles this Ingress
    nginx.ingress.kubernetes.io/use-regex: "true" ##allow regular expressions in the paths defined under rules
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "600" ##backend connection timeout, default 5s
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600" ##timeout for sending data to the backend, default 60s
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600" ##timeout for reading the backend response, default 60s
    nginx.ingress.kubernetes.io/proxy-body-size: "10m" ##maximum client request body (upload) size, default 1m
    #nginx.ingress.kubernetes.io/rewrite-target: / ##URL rewrite
    nginx.ingress.kubernetes.io/app-root: /index.html
spec:
  rules:
#  - host: www.jiege.com
#    http:
#      paths:
#      - path: /magedu
#        backend:
#          serviceName: magedu-tomcat-app1-service
#          servicePort: 80
#      - path: /magedu2
#        backend:
#          serviceName: magedu-tomcat-app2-service 
#          servicePort: 80

  - host: www.zhao.com
    http:
      paths:
      - pathType: Prefix
        path: "/app1"
        backend:
          service:
            name: web-tomcat-app1-service
            port:
              number: 80

      - pathType: Prefix
        path: "/app2"
        backend:
          service:
            name: web-tomcat-app2-service
            port:
              number: 80

# kubectl apply -f 3.1.ingress-url.yaml 
ingress.networking.k8s.io/nginx-web created
# kubectl get ingress -n web
NAME        CLASS    HOSTS          ADDRESS   PORTS   AGE
nginx-web   <none>   www.zhao.com             80      10s

Access www.zhao.com/app1 and www.zhao.com/app2 (screenshots omitted).
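Both paths on the same host can likewise be checked with curl; this assumes each backend serves a test page under its own path, as was prepared for app2 earlier:

# curl --resolve www.zhao.com:80:172.20.20.191 http://www.zhao.com/app1/index.jsp
# curl --resolve www.zhao.com:80:172.20.20.191 http://www.zhao.com/app2/index.jsp
#expected from the app2 backend: tomcat-app2-ingress-nginx-test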

5.3.6 Single-domain and multi-domain HTTPS with ingress

5.3.6.1 Single-domain HTTPS

#Issue a self-signed CA and server certificate
# mkdir -p /data/certs
# cd /data/certs
# openssl req -x509 -sha256 -newkey rsa:4096 -keyout ca.key -out ca.crt -days 3560 -nodes -subj '/CN=www.ca.com'
# openssl req -new -newkey rsa:4096 -keyout server.key -out server.csr -nodes -subj '/CN=www.zhao.com'
# openssl x509 -req -sha256 -days 3650 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt
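Optionally inspect the signed certificate before loading it into the cluster:

# openssl x509 -in server.crt -noout -subject -dates
#expected subject: CN = www.zhao.com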

# Create the TLS secret
# kubectl create secret tls tls-secret-www --cert=server.crt --key=server.key -n web
# vim 4.1.ingress-https-magedu_single-host.yaml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-web
  namespace: web
  annotations:
    kubernetes.io/ingress.class: "nginx" ##which Ingress Controller handles this Ingress
    nginx.ingress.kubernetes.io/ssl-redirect: 'true' #force HTTP requests to redirect to HTTPS, the equivalent of site-wide HTTPS in nginx
spec:
  tls:
  - hosts:
    - www.zhao.com
    secretName: tls-secret-www
#  rules:
#  - host: www.jiege.com
#    http:
#      paths:
#      - path: /
#        backend:
#          serviceName: magedu-tomcat-app1-service
#          servicePort: 80

  rules:
  - host: www.zhao.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: web-tomcat-app1-service
            port:
              number: 80

# kubectl apply -f 4.1.ingress-https-magedu_single-host.yaml 
ingress.networking.k8s.io/nginx-web created

(browser test screenshot omitted)
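From the command line, curl needs -k (or --cacert ca.crt) because the certificate is signed by our own CA; the ssl-redirect annotation can be verified at the same time (ingress-nginx normally redirects with a 308, stated here as an assumption for this version):

#https request, skipping CA verification
# curl -k --resolve www.zhao.com:443:172.20.20.191 https://www.zhao.com/
#plain http should be redirected to https
# curl -s -o /dev/null -w '%{http_code}\n' --resolve www.zhao.com:80:172.20.20.191 http://www.zhao.com/
#expected: 308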

5.3.6.2 Multi-domain HTTPS

#Create the certificate for mobile.zhao.com
# openssl req -new -newkey rsa:4096 -keyout mobile.key -out mobile.csr -nodes -subj '/CN=mobile.zhao.com'
# openssl x509 -req -sha256 -days 3650 -in mobile.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out mobile.crt

# Create the TLS secret
# kubectl create secret tls tls-secret-mobile --cert=mobile.crt --key=mobile.key -n web

#Inspect
# kubectl describe secret -n web
Name:         tls-secret-mobile
Namespace:    web
Labels:       <none>
Annotations:  <none>

Type:  kubernetes.io/tls

Data
====
tls.crt:  1667 bytes
tls.key:  3272 bytes


Name:         tls-secret-www
Namespace:    web
Labels:       <none>
Annotations:  <none>

Type:  kubernetes.io/tls

Data
====
tls.crt:  1663 bytes
tls.key:  3272 bytes
# vim 4.2.ingress-https-magedu_multi-host.yaml 
#apiVersion: networking.k8s.io/v1beta1
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-web-mobile
  namespace: web
  annotations:
    kubernetes.io/ingress.class: "nginx" ##which Ingress Controller handles this Ingress
    nginx.ingress.kubernetes.io/ssl-redirect: 'true'
spec:
  tls:
  - hosts:
    - mobile.zhao.com
    secretName: tls-secret-mobile 
  - hosts:
    - www.zhao.com
    secretName: tls-secret-www
  rules:
  - host: www.zhao.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: web-tomcat-app1-service
            port:
              number: 80


  - host: mobile.zhao.com
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: web-tomcat-app2-service
            port:
              number: 80

# kubectl apply -f 4.2.ingress-https-magedu_multi-host.yaml 
ingress.networking.k8s.io/nginx-web-mobile created

Access both HTTPS domains in a browser (screenshots omitted).
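Beyond the browser, openssl s_client can confirm that SNI hands each domain its own certificate:

# openssl s_client -connect 172.20.20.191:443 -servername mobile.zhao.com </dev/null 2>/dev/null | openssl x509 -noout -subject
#expected: subject=CN = mobile.zhao.com
# openssl s_client -connect 172.20.20.191:443 -servername www.zhao.com </dev/null 2>/dev/null | openssl x509 -noout -subject
#expected: subject=CN = www.zhao.com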

5.4 Updating an ingress certificate

5.4.1 Create a new certificate

#Delete the old certificate files and generate new ones
root@k8s-master2:/data# rm  -rf certs/
root@k8s-master2:/data# ls
root@k8s-master2:/data# mkdir certs
root@k8s-master2:/data# cd certs/
root@k8s-master2:/data/certs# ls
root@k8s-master2:/data/certs# openssl req -x509 -sha256 -newkey rsa:4096 -keyout ca.key -out ca.crt -days 3560 -nodes -subj '/CN=www.ca.com'
Generating a RSA private key
..................................................................++++
.......................................................................................................++++
writing new private key to 'ca.key'
-----
root@k8s-master2:/data/certs# openssl req -new -newkey rsa:4096 -keyout server.key -out server.csr -nodes -subj '/CN=www.zhao.com'
Generating a RSA private key
...............................................................++++
.......................................++++
writing new private key to 'server.key'
-----
root@k8s-master2:/data/certs# openssl x509 -req -sha256 -days 3650 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt
Signature ok
subject=CN = www.zhao.com
Getting CA Private Key

Base64-encode the contents of server.crt and server.key (note: base64 is an encoding, not encryption). An online tool works:
Base64 在线编码解码 | Base64 加密解密 - Base64.us
https://base64.us/
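The same encoding can be done locally with coreutils, which avoids pasting key material into a web page; -w0 disables line wrapping so the output can be pasted directly into the Secret:

# base64 -w0 server.crt
# base64 -w0 server.key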


Replace the tls.crt and tls.key values in tls-secret-www with the new base64-encoded crt and key:

# kubectl edit secrets -n web tls-secret-www
secret/tls-secret-www edited
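Hand-editing base64 blobs is error-prone; an equivalent way to rotate the certificate is to regenerate the Secret from the new files and apply it over the existing one (same Secret name and namespace as above):

# kubectl create secret tls tls-secret-www --cert=server.crt --key=server.key -n web --dry-run=client -o yaml | kubectl apply -f -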

Test: confirm the new certificate is being served (screenshot omitted).