Preface
Traefik
Traefik is a lightweight HTTP reverse proxy and load balancer written in Go. Because it discovers and refreshes backend nodes automatically, it is supported by most container platforms today, such as Kubernetes, Swarm, and Rancher. Since Traefik talks to the Kubernetes API in real time, it reacts to changes in a Service's endpoints very quickly. Overall, Traefik runs beautifully in Kubernetes.
Nginx-Ingress-Controller
Nginx-Ingress-Controller is familiar to most people who have just started with Kubernetes: a layer-7 reverse proxy that exposes Services to the outside world. The latest version is 0.9.0-beta.15, so nginx-ingress-controller is clearly still in beta. That said, anyone who has used it knows how powerful its annotations are, allowing rich per-Service customization — something Traefik cannot yet match.
Deployment
Traefik
1. Create the RBAC rules
```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
  - kind: ServiceAccount
    name: traefik-ingress-controller
    namespace: kube-system
```
2. Define the Deployment

The official manifests offer two deployment modes, DaemonSet and Deployment. The main differences are:

- Compared with a DaemonSet, which runs exactly one Traefik pod per node, a Deployment is easier to scale up and down;
- A DaemonSet can use the `taints` and `tolerations` fields to deploy Traefik only on selected nodes;
- With a DaemonSet, ports 80 and 443 are reachable on any node, whereas with a Deployment you must go through the Service object.

Since we want to deploy several ingress services in this Kubernetes cluster, we use the Deployment mode throughout.
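As a rough sketch (not from the original manifests), a DaemonSet variant could pin Traefik to dedicated edge nodes via tolerations; the `node-role.kubernetes.io/edge` taint and label here are purely illustrative:

```yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      # Schedule only onto nodes carrying the (illustrative) edge label,
      # and tolerate the matching taint so the scheduler allows it there.
      nodeSelector:
        node-role.kubernetes.io/edge: "true"
      tolerations:
        - key: node-role.kubernetes.io/edge
          operator: Exists
          effect: NoSchedule
      containers:
        - image: traefik
          name: traefik-ingress-lb
          ports:
            - containerPort: 80
              hostPort: 80
            - containerPort: 443
              hostPort: 443
          args:
            - --kubernetes
```

With `hostPort`, ports 80 and 443 become reachable directly on each matching node, which is the DaemonSet advantage mentioned above.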
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
        - image: traefik
          name: traefik-ingress-lb
          ports:
            - containerPort: 80
              name: web
            - containerPort: 8080
              name: admin
          args:
            # --web, --web.metrics.prometheus and --traefiklogsfile are
            # deprecated duplicates of --api, --metrics.prometheus and
            # --traefiklog.filepath, so only the newer flags are kept here.
            - --kubernetes
            - --api
            - --accesslog
            - --accesslog.filepath=/var/log/traefik_access.log
            - --traefiklog
            - --traefiklog.filepath=/var/log/traefik.log
            - --metrics.prometheus
            - --loglevel=INFO
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: web
    - protocol: TCP
      port: 8080
      name: admin
  type: NodePort
```
3. Configure the Traefik UI

Traefik's UI is a simple dashboard written in AngularJS; it listens on port 8080 by default.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: traefik-ui.minikube
      http:
        paths:
          - backend:
              serviceName: traefik-web-ui
              servicePort: 80
```
4. Usage

Anyone who has edited an Ingress already knows the basics, so I won't repeat them here. Instead, let's look at the annotations Traefik supports in Kubernetes.
General configuration

- `kubernetes.io/ingress.class: traefik`
  Declares that this Ingress is handled by Traefik rather than the nginx controller.
- `ingress.kubernetes.io/whitelist-source-range: "1.2.3.0/24, fe80::/16"`
  Configures a source-IP whitelist; both IPv4 and IPv6 ranges are supported.
- `ingress.kubernetes.io/auth-type: basic`
  The HTTP authentication mode, here basic auth.
- `ingress.kubernetes.io/auth-secret: mysecret`
  The username and password for basic auth, stored as a Secret in the Kubernetes namespace where Traefik lives.
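As a hedged sketch, the general annotations above might be combined on a single Ingress like this; the host, the service name `my-service`, and the secret name `mysecret` are illustrative, and the `mysecret` Secret would be created separately from htpasswd-style credentials:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: traefik
    ingress.kubernetes.io/whitelist-source-range: "1.2.3.0/24, fe80::/16"
    ingress.kubernetes.io/auth-type: basic
    ingress.kubernetes.io/auth-secret: mysecret
spec:
  rules:
    - host: my-app.example.com
      http:
        paths:
          - backend:
              serviceName: my-service
              servicePort: 80
```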
Frontend configuration

- `traefik.frontend.rule.type: PathPrefixStrip`
  This option must be set when the frontend forwards multiple paths; the matched prefix is stripped before the request is passed on. For example:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cheeses
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.frontend.rule.type: PathPrefixStrip
spec:
  rules:
    - host: cheeses.minikube
      http:
        paths:
          - path: /stilton
            backend:
              serviceName: stilton
              servicePort: http
          - path: /cheddar
            backend:
              serviceName: cheddar
              servicePort: http
          - path: /wensleydale
            backend:
              serviceName: wensleydale
              servicePort: http
```
- `traefik.frontend.priority: "3"`
  Sets the frontend priority; higher values are matched first.
- `traefik.frontend.passHostHeader: "false"`
  Disables passing the incoming Host header to the backend.
- `traefik.protocol=https`
  Uses the HTTPS protocol to reach the backend.
- `traefik.frontend.entryPoints=http,https`
  Accepts both HTTP and HTTPS.
Backend configuration

- `traefik.backend.loadbalancer.method=drr`
  The load-balancing strategy for the backend Service; Traefik currently supports wrr (weighted round robin) and drr (dynamic round robin).
- `traefik.backend.loadbalancer.stickiness=true`
  Enables session affinity (sticky sessions) on the load balancer.
- `traefik.backend.loadbalancer.stickiness.cookieName=NAME`
  Manually sets the cookie name used for session affinity.
- `traefik.backend.loadbalancer.sticky=true`
  Deprecated.
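To make the placement concrete: in Traefik 1.x these backend annotations are typically attached to the backend Service object rather than the Ingress. A minimal sketch, where the service name `my-service`, the selector, and the cookie name are all illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # Dynamic round robin with sticky sessions via a named cookie.
    traefik.backend.loadbalancer.method: drr
    traefik.backend.loadbalancer.stickiness: "true"
    traefik.backend.loadbalancer.stickiness.cookieName: my_sticky_cookie
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```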
Health checks

- `traefik.backend.healthcheck.path=/health`
  The path Traefik probes for health checks.
- `traefik.backend.healthcheck.interval=5s`
  The interval between health checks.
- `traefik.backend.circuitbreaker: "NetworkErrorRatio() > 0.5"`
  Takes a node out of rotation when its service's network error ratio reaches 50%.
- `traefik.backend.circuitbreaker: "LatencyAtQuantileMS(50.0) > 50"`
  Takes a node out of rotation when its median (50th percentile) latency exceeds 50 ms.
- `traefik.backend.circuitbreaker: "ResponseCodeRatio(500, 600, 0, 600) > 0.5"`
  Takes a node out of rotation when status codes in [500, 600) make up more than 50% of those in [0, 600).
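Health checks and a circuit breaker can be sketched on the backend Service in the same way; the service name and selector are illustrative, and only one circuit-breaker expression can be active at a time:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # Probe /health every 5 seconds on each backend endpoint.
    traefik.backend.healthcheck.path: /health
    traefik.backend.healthcheck.interval: 5s
    # Pick one circuit-breaker expression; they cannot be combined here.
    traefik.backend.circuitbreaker: "NetworkErrorRatio() > 0.5"
spec:
  selector:
    app: my-app
  ports:
    - port: 80
```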
Nginx-Ingress-Controller
1. Create the RBAC rules and the ServiceAccount
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: kube-system
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "ingress-controller-leader-nginx"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: kube-system
```
2. Create the nginx-ingress-controller ConfigMaps

I added an extra tcp-configmap here, used to proxy raw TCP ports.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-configmap
  namespace: kube-system
data:
  enable-vts-status: "true"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-controller-leader-nginx
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-configmap
  namespace: kube-system
```
3. Create the default-http-backend for the nginx-ingress-controller
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    k8s-app: default-http-backend
  name: default-http-backend
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: default-http-backend
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        k8s-app: default-http-backend
    spec:
      containers:
        - image: index.tenxcloud.com/google_containers/defaultbackend:1.1
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 5
          name: default-http-backend
          ports:
            - containerPort: 8080
              protocol: TCP
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
            requests:
              cpu: 10m
              memory: 20Mi
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 60
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: kube-system
  labels:
    k8s-app: default-http-backend
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    k8s-app: default-http-backend
```
4. Create the nginx-ingress-controller

Note the controller's `--tcp-services-configmap` startup argument: it watches the tcp-configmap defined earlier and provides layer-4 load balancing for the TCP ports listed there. Be aware that ports defined in the tcp-configmap must not be duplicated.
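For illustration, a populated tcp-configmap might look like this: each key is the port the controller listens on, and each value has the form `<namespace>/<service name>:<service port>`. The mysql Service below is hypothetical:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-configmap
  namespace: kube-system
data:
  # Expose default/mysql's port 3306 on the controller's port 3306.
  "3306": "default/mysql:3306"
```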
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    k8s-app: nginx-ingress-controller
  name: nginx-ingress-controller
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: nginx-ingress-controller
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-controller
    spec:
      containers:
        - args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=kube-system/ingress-configmap
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-configmap
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
          image: index.int.jumei.com/gcr.io/google-containers/nginx-ingress-controller-amd64:0.9.0-beta.15
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          name: nginx-ingress-controller
          ports:
            - containerPort: 80
              hostPort: 80
              protocol: TCP
            - containerPort: 443
              hostPort: 443
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          securityContext:
            privileged: true
      dnsPolicy: ClusterFirst
      serviceAccount: nginx-ingress-serviceaccount
      serviceAccountName: nginx-ingress-serviceaccount
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  namespace: kube-system
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
      name: http
    - port: 443
      protocol: TCP
      targetPort: 443
      name: https
  selector:
    k8s-app: nginx-ingress-controller
  sessionAffinity: None
  type: ClusterIP
```
5. Usage

I'll leave a summary of Ingress usage for a future post.
Summary

After a couple of days using Traefik and the Nginx-ingress-controller, I found that Traefik covers the vast majority of my needs. I have to say this newcomer looks ready to leave its elders stranded on the beach; Traefik's friendly and rapid support for Kubernetes is well worth trying. Traefik now has over 12,000 stars on GitHub and more than 200 releases, so interest is clearly high. For microservices that get added, deleted, killed, upgraded, and downgraded many times a day, it is a lifesaver: you never need to configure its backends by hand, because it watches Service changes automatically and hot-reloads its configuration, as the HTTP proxy diagram on the official site illustrates.