Istio Traffic Management

I. Traffic-Management-Basics

  • ms-demo
  1. 01-demoapp-v10
cd istio-in-practise/Traffic-Management-Basics/ms-demo/01-demoapp-v10
# Deploy the backend demoapp
kubectl apply -f deploy-demoapp.yaml
# Switch to the Istio directory and deploy the client pod
kubectl apply -f istio-1.13.3/samples/sleep/sleep.yaml
# Exec into the sleep client pod and access the backend service
kubectl exec -it sleep-698cfc4445-ldm4l -- sh
curl demoappv10:8080
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-tv7gc, ServerIP: 192.168.104.12!
# View the demoapp endpoints known to the client's sidecar
istioctl pc endpoint sleep-698cfc4445-ldm4l | grep demoapp
192.168.104.10:8080              HEALTHY     OK                outbound|8080||demoappv10.default.svc.cluster.local
192.168.104.12:8080              HEALTHY     OK                outbound|8080||demoappv10.default.svc.cluster.local
192.168.166.143:8080             HEALTHY     OK                outbound|8080||demoappv10.default.svc.cluster.local
# Create the proxy (front-end proxy) pod
kubectl apply -f deploy-proxy.yaml
# From inside the sleep client pod, access the proxy service
while true; do curl proxy; sleep 0.$RANDOM; done
Proxying value: iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-844db, ServerIP: 192.168.104.10!
 - Took 631 milliseconds.

Kiali traffic graph (screenshot omitted)

When the client pod sends a request, traffic is forwarded by the sidecar Envoy of the pod running curl directly to the three demoapp pods, no longer passing through the Service for scheduling. Forwarding and load balancing are thereby lifted from the underlying Service into the mesh's own sidecars, so the Service no longer sits in the data path (it is only used for discovery). The newly deployed proxy pod is likewise discovered by every sidecar in the mesh, with corresponding route, cluster, and endpoint entries (viewable with the istioctl pc / proxy-config commands).
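For context, a minimal sketch of roughly what deploy-demoapp.yaml is assumed to contain: a three-replica demoapp v1.0 Deployment fronted by a demoappv10 Service on port 8080. The labels and image tag below are assumptions, not taken from the document; consult the repository for the authoritative manifest.

# Sketch only — assumed shape of deploy-demoapp.yaml; labels and image tag are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demoappv10
spec:
  replicas: 3                      # three endpoints appear in the istioctl output above
  selector:
    matchLabels:
      app: demoapp
      version: v1.0
  template:
    metadata:
      labels:
        app: demoapp
        version: v1.0
    spec:
      containers:
      - name: demoapp
        image: ikubernetes/demoapp:v1.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: demoappv10               # the hostname used by "curl demoappv10:8080"
spec:
  selector:
    app: demoapp
    version: v1.0
  ports:
  - name: http
    port: 8080
    targetPort: 8080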

  2. 02-demoapp-v11
cd istio-in-practise/Traffic-Management-Basics/ms-demo/02-demoapp-v11
# Deploy the new demoapp v1.1 pods
kubectl apply -f deploy-demoapp-v11.yaml
# Deploy the demoapp Service
kubectl apply -f service-demoapp.yaml
# Redeploy the proxy pod
kubectl apply -f deploy-proxy.yaml
# From inside the sleep client pod, access the proxy service
while true; do curl proxy; sleep 0.$RANDOM; done
Proxying value: iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-tv7gc, ServerIP: 192.168.104.12!
 - Took 61 milliseconds.
Proxying value: iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-tv7gc, ServerIP: 192.168.104.12!
 - Took 12 milliseconds.
Proxying value: iKubernetes demoapp v1.1 !! ClientIP: 127.0.0.6, ServerName: demoappv11-77755cdc65-dzr78, ServerIP: 192.168.104.14!
 - Took 16 milliseconds.
Proxying value: iKubernetes demoapp v1.1 !! ClientIP: 127.0.0.6, ServerName: demoappv11-77755cdc65-sn9p5, ServerIP: 192.168.166.147!
 - Took 12 milliseconds.
# List the endpoints whose cluster name matches the demoapp service — pods of both demoapp-v10 and demoapp-v11 are included
istioctl pc endpoint sleep-698cfc4445-ldm4l | grep "demoapp\>"
192.168.104.10:8080              HEALTHY     OK                outbound|8080||demoapp.default.svc.cluster.local
192.168.104.12:8080              HEALTHY     OK                outbound|8080||demoapp.default.svc.cluster.local
192.168.104.14:8080              HEALTHY     OK                outbound|8080||demoapp.default.svc.cluster.local
192.168.166.143:8080             HEALTHY     OK                outbound|8080||demoapp.default.svc.cluster.local
192.168.166.147:8080             HEALTHY     OK                outbound|8080||demoapp.default.svc.cluster.local
# Deploy the VirtualService
kubectl apply -f virutalservice-demoapp.yaml
# From inside the sleep client pod, access the proxy service
while true; do curl proxy; curl proxy/canary; sleep 0.$RANDOM; done
Proxying value: iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-x6889, ServerIP: 192.168.166.152!
 - Took 13 milliseconds.
Proxying value: iKubernetes demoapp v1.1 !! ClientIP: 127.0.0.6, ServerName: demoappv11-77755cdc65-dzr78, ServerIP: 192.168.104.18!
 - Took 8 milliseconds.
Proxying value: iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-844db, ServerIP: 192.168.104.16!
 - Took 8 milliseconds.

Kiali traffic graph before the VirtualService is applied (screenshot omitted)

Kiali traffic graph after the VirtualService is applied (screenshot omitted)

After the proxy is redeployed, requests to the demoapp Service are answered by both the v1.0 and v1.1 versions of demoapp; once the VirtualService is applied, requests to / are served by v1.0 and requests to /canary by v1.1, as the output above shows.
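A minimal sketch of what the virutalservice-demoapp.yaml applied above is assumed to look like, reconstructed from the observed behavior (requests to / go to demoappv10, requests to /canary to demoappv11); the actual file in the repository may differ.

# Sketch only — reconstructed from the observed routing, not copied from the repository.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: demoapp
spec:
  hosts:
  - demoapp                  # applies to requests addressed to the demoapp Service
  http:
  - name: canary
    match:
    - uri:
        prefix: /canary
    rewrite:
      uri: /                 # strip /canary before forwarding
    route:
    - destination:
        host: demoappv11     # the per-version Services still exist at this point
  - name: default
    route:
    - destination:
        host: demoappv10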

  3. 03-demoapp-subset
cd istio-in-practise/Traffic-Management-Basics/ms-demo/03-demoapp-subset
# Delete the demoappv10 and demoappv11 Services
kubectl delete svc demoappv10 demoappv11
# Create the demoapp subsets (DestinationRule)
kubectl apply -f destinationrule-demoapp.yaml
# Redefine the VirtualService
kubectl apply -f virutalservice-demoapp.yaml
# Verify traffic to v1.0
while true; do curl proxy; sleep 0.$RANDOM; done
Proxying value: iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-x6889, ServerIP: 192.168.166.152!
 - Took 8 milliseconds.
Proxying value: iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-844db, ServerIP: 192.168.104.16!
 - Took 7 milliseconds.
# Verify traffic to v1.1
while true; do curl proxy/canary; sleep 0.$RANDOM; done
Proxying value: iKubernetes demoapp v1.1 !! ClientIP: 127.0.0.6, ServerName: demoappv11-77755cdc65-sn9p5, ServerIP: 192.168.166.150!
 - Took 7 milliseconds.
Proxying value: iKubernetes demoapp v1.1 !! ClientIP: 127.0.0.6, ServerName: demoappv11-77755cdc65-dzr78, ServerIP: 192.168.104.18!
 - Took 7 milliseconds.

After the VirtualService is redefined, the DestinationRule partitions the pods of different versions behind a single Service into separate subsets. One service can thus distinguish versions through subsets, and each subset can also serve as its own routing destination.
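A sketch of the assumed destinationrule-demoapp.yaml plus the redefined VirtualService: the DestinationRule carves the demoapp Service into v10/v11 subsets by pod label, and the VirtualService routes to those subsets instead of to separate Services. Subset names and version labels below are assumptions.

# Sketch only — subset names and version labels are assumptions.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: demoapp
spec:
  host: demoapp
  subsets:
  - name: v10
    labels:
      version: v1.0
  - name: v11
    labels:
      version: v1.1
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: demoapp
spec:
  hosts:
  - demoapp
  http:
  - name: canary
    match:
    - uri:
        prefix: /canary
    rewrite:
      uri: /
    route:
    - destination:
        host: demoapp
        subset: v11
  - name: default
    route:
    - destination:
        host: demoapp
        subset: v10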

  4. 04-proxy-gateway
cd istio-in-practise/Traffic-Management-Basics/ms-demo/04-proxy-gateway
# Create the Gateway and related resources
kubectl apply -f .
# Add a hosts entry for proxy.test.com on the host machine, then access demoapp through the gateway
while true; do curl proxy.test.com ; sleep 1; done

Kiali: traffic reaching the proxy both through the gateway and from inside the mesh (screenshot omitted)
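This step applies a Gateway plus a VirtualService bound to it, so that proxy.test.com is reachable through the Istio ingress gateway. A minimal sketch under the assumption that both objects live in the default namespace and use the default ingress gateway selector; the resource names are assumptions.

# Sketch only — resource names, namespace placement, and selector are assumptions.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: proxy-gateway
spec:
  selector:
    istio: ingressgateway        # the default Istio ingress gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - proxy.test.com
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: proxy-gateway
spec:
  hosts:
  - proxy.test.com
  gateways:
  - proxy-gateway              # only traffic entering through the gateway above
  http:
  - route:
    - destination:
        host: proxy            # the in-mesh proxy Service deployed earlier
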
  5. 05-url-redirect-and-rewrite
cd istio-in-practise/Traffic-Management-Basics/ms-demo/05-url-redirect-and-rewrite
# Deploy the backend application
kubectl apply -f deploy-backend.yaml
# Create the proxy VirtualService
kubectl apply -f virtualservice-proxy.yaml
# Verify traffic
curl proxy
Proxying value: iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-844db, ServerIP: 192.168.104.24!
 - Took 1196 milliseconds.
# Verify the redirect for /backend
curl -I proxy/backend
HTTP/1.1 301 Moved Permanently
location: http://backend:8082/
date: Tue, 24 May 2022 08:42:51 GMT
server: envoy
transfer-encoding: chunked
# Create the demoapp VirtualService
kubectl apply -f virtualservice-demoapp.yaml
# Verify traffic to v1.0
curl demoapp:8080
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-tv7gc, ServerIP: 192.168.104.23!
# Verify traffic to v1.1
curl demoapp:8080/canary
iKubernetes demoapp v1.1 !! ClientIP: 127.0.0.6, ServerName: demoappv11-77755cdc65-sn9p5, ServerIP: 192.168.166.157!
# Verify the rewrite
curl -I demoapp:8080/backend
HTTP/1.1 301 Moved Permanently
location: http://backend:8082/
date: Tue, 24 May 2022 08:51:02 GMT
server: envoy
transfer-encoding: chunked
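Sketches of the two VirtualServices assumed to be applied in this step, reconstructed from the responses above: the proxy VirtualService issues a 301 redirect to backend:8082 for /backend, and the demoapp VirtualService rewrites /canary to / before routing to the v11 subset while also redirecting /backend. Everything not visible in the output is an assumption.

# Sketch only — reconstructed from the 301 responses and the /canary behavior shown above.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: proxy
spec:
  hosts:
  - proxy
  http:
  - name: redirect
    match:
    - uri:
        prefix: /backend
    redirect:
      uri: /
      authority: backend:8082    # matches the "location: http://backend:8082/" header above
  - name: default
    route:
    - destination:
        host: proxy
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: demoapp
spec:
  hosts:
  - demoapp
  http:
  - name: redirect
    match:
    - uri:
        prefix: /backend
    redirect:
      uri: /
      authority: backend:8082
  - name: canary
    match:
    - uri:
        prefix: /canary
    rewrite:
      uri: /                     # rewrite instead of redirect: the client URL stays /canary
    route:
    - destination:
        host: demoapp
        subset: v11
  - name: default
    route:
    - destination:
        host: demoapp
        subset: v10
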
  6. 06-weight-based-routing
cd istio-in-practise/Traffic-Management-Basics/ms-demo/06-weight-based-routing
# Delete the two VirtualServices from step 05, then apply the new one
kubectl apply -f virtualservice-demoapp.yaml
# Verify traffic splitting
while true; do curl proxy; sleep 0.$RANDOM; done
Proxying value: iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-tv7gc, ServerIP: 192.168.104.23!
 - Took 50 milliseconds.
Proxying value: iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-844db, ServerIP: 192.168.104.24!
 - Took 24 milliseconds.
Proxying value: iKubernetes demoapp v1.1 !! ClientIP: 127.0.0.6, ServerName: demoappv11-77755cdc65-dzr78, ServerIP: 192.168.104.28!
 - Took 53 milliseconds.
Proxying value: iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-x6889, ServerIP: 192.168.166.159!
 - Took 14 milliseconds.
Proxying value: iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-tv7gc, ServerIP: 192.168.104.23!
 - Took 33 milliseconds.

Kiali traffic verification (screenshot omitted)
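A sketch of the assumed weight-based VirtualService; the exact split is not shown in the document, so the 90/10 weights below are an assumption chosen to match the mostly-v1.0 output above.

# Sketch only — the 90/10 weights are assumptions.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: demoapp
spec:
  hosts:
  - demoapp
  http:
  - name: weight-based-routing
    route:
    - destination:
        host: demoapp
        subset: v10
      weight: 90
    - destination:
        host: demoapp
        subset: v11
      weight: 10
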
  7. 07-headers-operation
cd istio-in-practise/Traffic-Management-Basics/ms-demo/07-headers-operation
# Apply the demoapp VirtualService
kubectl apply -f virtualservice-demoapp.yaml
# Check the response headers from demoappv10
curl -I demoapp:8080
HTTP/1.1 200 OK
content-type: text/html; charset=utf-8
content-length: 115
server: envoy
date: Tue, 24 May 2022 09:04:30 GMT
x-envoy-upstream-service-time: 115
x-envoy: test
# Check the response headers from demoappv11
curl -I -H "x-canary: true" demoapp:8080
HTTP/1.1 200 OK
content-type: text/html; charset=utf-8
content-length: 116
server: envoy
date: Tue, 24 May 2022 09:05:00 GMT
x-envoy-upstream-service-time: 46
x-canary: true
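A sketch of the assumed headers-operation VirtualService, reconstructed from the two responses above: requests carrying x-canary: true are routed to the v11 subset and get an x-canary: true response header, while default traffic goes to v10 with an x-envoy: test response header added.

# Sketch only — reconstructed from the response headers shown above.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: demoapp
spec:
  hosts:
  - demoapp
  http:
  - name: canary
    match:
    - headers:
        x-canary:
          exact: "true"
    headers:
      response:
        set:
          x-canary: "true"       # echoed back in the v1.1 response above
    route:
    - destination:
        host: demoapp
        subset: v11
  - name: default
    headers:
      response:
        set:
          x-envoy: test          # seen in the v1.0 response above
    route:
    - destination:
        host: demoapp
        subset: v10
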
  8. 08-fault-injection
cd istio-in-practise/Traffic-Management-Basics/ms-demo/08-fault-injection
# Apply the fault-injection VirtualService
kubectl apply -f virtualservice-demoapp.yaml
# Verify the abort fault
while true; do curl demoapp:8080/canary; sleep 0.$RANDOM; done
iKubernetes demoapp v1.1 !! ClientIP: 127.0.0.6, ServerName: demoappv11-77755cdc65-sn9p5, ServerIP: 192.168.166.157!
iKubernetes demoapp v1.1 !! ClientIP: 127.0.0.6, ServerName: demoappv11-77755cdc65-sn9p5, ServerIP: 192.168.166.157!
fault filter abortiKubernetes demoapp v1.1 !! ClientIP: 127.0.0.6, ServerName: demoappv11-77755cdc65-dzr78, ServerIP: 192.168.104.28!
iKubernetes demoapp v1.1 !! ClientIP: 127.0.0.6, ServerName: demoappv11-77755cdc65-sn9p5, ServerIP: 192.168.166.157!
iKubernetes demoapp v1.1 !! ClientIP: 127.0.0.6, ServerName: demoappv11-77755cdc65-dzr78, ServerIP: 192.168.104.28!
# Verify the delay fault
while true; do curl demoapp:8080; sleep 0.$RANDOM; done
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-x6889, ServerIP: 192.168.166.159!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-x6889, ServerIP: 192.168.166.159!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-tv7gc, ServerIP: 192.168.104.23!
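A sketch of the assumed fault-injection VirtualService: an abort fault on the /canary route (producing the "fault filter abort" responses above) and a fixed delay on the default route. The status code, delay duration, and percentages are assumptions.

# Sketch only — fault percentages, status code, and delay duration are assumptions.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: demoapp
spec:
  hosts:
  - demoapp
  http:
  - name: canary
    match:
    - uri:
        prefix: /canary
    rewrite:
      uri: /
    fault:
      abort:
        percentage:
          value: 20            # abort roughly one request in five
        httpStatus: 555
    route:
    - destination:
        host: demoapp
        subset: v11
  - name: default
    fault:
      delay:
        percentage:
          value: 20
        fixedDelay: 3s         # hold some requests for a few seconds
    route:
    - destination:
        host: demoapp
        subset: v10
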
  9. 09-http-retry
cd istio-in-practise/Traffic-Management-Basics/ms-demo/09-http-retry
# Apply the retry VirtualService
kubectl apply -f .
# Verify retries for the delay fault
while true; do curl demoapp:8080; sleep 0.$RANDOM; done
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-x6889, ServerIP: 192.168.166.159!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-tv7gc, ServerIP: 192.168.104.23!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-844db, ServerIP: 192.168.104.24!
# Verify retries for the abort fault
while true; do curl demoapp:8080/canary; sleep 0.$RANDOM; done
fault filter abortiKubernetes demoapp v1.1 !! ClientIP: 127.0.0.6, ServerName: demoappv11-77755cdc65-sn9p5, ServerIP: 192.168.166.157!
iKubernetes demoapp v1.1 !! ClientIP: 127.0.0.6, ServerName: demoappv11-77755cdc65-dzr78, ServerIP: 192.168.104.28!
iKubernetes demoapp v1.1 !! ClientIP: 127.0.0.6, ServerName: demoappv11-77755cdc65-sn9p5, ServerIP: 192.168.166.157!
fault filter abortiKubernetes demoapp v1.1 !! ClientIP: 127.0.0.6, ServerName: demoappv11-77755cdc65-dzr78, ServerIP: 192.168.104.28!
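A sketch of the assumed retry configuration for this step: the step-08 faults are kept, and a retry policy is layered on top so that most aborted or timed-out attempts are transparently retried. Attempt counts and timeouts are assumptions; retryOn 5xx also covers attempts that fail to respond within perTryTimeout.

# Sketch only — retry attempts, timeouts, and the retained fault values are assumptions.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: demoapp
spec:
  hosts:
  - demoapp
  http:
  - name: canary
    match:
    - uri:
        prefix: /canary
    rewrite:
      uri: /
    fault:
      abort:
        percentage:
          value: 20
        httpStatus: 555
    retries:
      attempts: 5
      perTryTimeout: 1s
      retryOn: 5xx             # retry the injected 5xx aborts
    route:
    - destination:
        host: demoapp
        subset: v11
  - name: default
    fault:
      delay:
        percentage:
          value: 20
        fixedDelay: 3s
    retries:
      attempts: 5
      perTryTimeout: 1s
      retryOn: 5xx,connect-failure,refused-stream
    route:
    - destination:
        host: demoapp
        subset: v10
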
  10. 10-traffic-mirror
cd istio-in-practise/Traffic-Management-Basics/ms-demo/10-traffic-mirror
# Apply the VirtualService
kubectl apply -f virtualservice-demoapp.yaml
# Verify traffic
while true; do curl demoapp:8080; sleep 0.$RANDOM; done
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-844db, ServerIP: 192.168.104.24!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-x6889, ServerIP: 192.168.166.159!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-tv7gc, ServerIP: 192.168.104.23!
# Verify the mirrored traffic in the demoappv11 pod logs
kubectl logs -f --tail 1 demoappv11-77755cdc65-dzr78
127.0.0.6 - - [24/xxx/20xx xx:17:25] "GET / HTTP/1.1" 200 -
127.0.0.6 - - [24/xxx/20xx xx:17:26] "GET / HTTP/1.1" 200 -
127.0.0.6 - - [24/xxx/20xx xx:17:26] "GET / HTTP/1.1" 200 -
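A sketch of the assumed traffic-mirroring VirtualService: live traffic is routed to the v10 subset while a copy of every request is mirrored to v11, which is why the demoappv11 pod logs above record GET requests even though the client only ever receives v1.0 responses.

# Sketch only — mirroring to the v11 subset is inferred from the pod logs above.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: demoapp
spec:
  hosts:
  - demoapp
  http:
  - name: traffic-mirror
    route:
    - destination:
        host: demoapp
        subset: v10            # the only version that answers the caller
    mirror:
      host: demoapp
      subset: v11              # receives a fire-and-forget copy of each request
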
  11. 11-cluster-loadbalancing
cd istio-in-practise/Traffic-Management-Basics/ms-demo/11-cluster-loadbalancing
# Apply the DestinationRule
kubectl apply -f destinationrule-demoapp.yaml
# Update the VirtualService
kubectl apply -f ../03-demoapp-subset/virutalservice-demoapp.yaml
# Verify traffic: with the X-User header added, requests stick to the same backend pod
while true; do curl -H "X-User: test" demoapp:8080; sleep 0.$RANDOM; done
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-844db, ServerIP: 192.168.104.24!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-844db, ServerIP: 192.168.104.24!
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.6, ServerName: demoappv10-b5d9576cc-844db, ServerIP: 192.168.104.24!
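A sketch of the assumed destinationrule-demoapp.yaml for this step: a consistent-hash load-balancing policy keyed on the X-User request header, which is why every request carrying the same X-User value lands on the same pod above. Only the header name is taken from the document; the rest is an assumption.

# Sketch only — consistent hashing on X-User is inferred from the sticky responses above.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: demoapp
spec:
  host: demoapp
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpHeaderName: X-User   # requests with the same X-User value hash to the same pod
  subsets:
  - name: v10
    labels:
      version: v1.0
  - name: v11
    labels:
      version: v1.1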

II. Bookinfo Example

  • Deploy Bookinfo
cd istio-1.13.3
# Tear down all the examples from Part I; the sleep client pod can stay
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
# Once all pods are running, verify Bookinfo; the following output indicates everything is working
kubectl exec "$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')" -c ratings -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>"
<title>Simple Bookstore App</title>
# Deploy the gateway
kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
# Check the Istio configuration
istioctl analyze
✔ No validation issues found when analyzing namespace: default.
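For reference, the essential resources in samples/bookinfo/networking/bookinfo-gateway.yaml, reproduced from memory of the upstream sample (consult the file shipped with the release for the authoritative version): a Gateway listening on port 80 plus a VirtualService that forwards the Bookinfo URLs to productpage:9080.

# Abridged from memory of the upstream sample; see the file in the Istio release for the exact content.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway      # use Istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage
        port:
          number: 9080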

Bookinfo example documentation: https://istio.io/latest/docs/setup/getting-started/

  • Verification
    Access the Bookinfo page in a browser (screenshot omitted)

Kiali page verification (screenshot omitted)
  • Configure request-routing rules
# Configure the default destination rules
kubectl apply -f samples/bookinfo/networking/destination-rule-all.yaml
# Route all requests to the v1 version of every service
kubectl apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml

Kiali traffic graph (screenshot omitted)
# Restrict which version is served based on the logged-in user
kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml

Not logged in: reviews v1 is shown (screenshot omitted)

Logged in as jason: reviews v2 is shown (screenshot omitted)
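The rule applied above (samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml), reproduced from memory of the upstream sample: requests whose end-user header is jason are routed to reviews v2, everyone else stays on v1.

# Reproduced from memory of the upstream sample; check the release file for the exact content.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason       # the product page sets this header after login
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v1
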
  • Configure fault-injection rules
# Inject a delay fault
kubectl apply -f samples/bookinfo/networking/virtual-service-ratings-test-delay.yaml

Normal access (screenshot omitted)

Logged in as jason: the response is delayed (screenshot omitted)
# Inject an abort fault
kubectl apply -f samples/bookinfo/networking/virtual-service-ratings-test-abort.yaml

Normal access (screenshot omitted)

Logged in as jason: ratings are unavailable (screenshot omitted)
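A sketch of the two fault rules applied above (virtual-service-ratings-test-delay.yaml and virtual-service-ratings-test-abort.yaml), reproduced from memory of the upstream samples — both target only the logged-in user jason on the ratings service; the exact values may differ slightly.

# From memory of the upstream samples; the abort variant is shown as comments inside the same route.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
spec:
  hosts:
  - ratings
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    fault:
      delay:                   # delay variant: jason's calls to ratings are held for ~7s
        percentage:
          value: 100.0
        fixedDelay: 7s
      # abort variant (virtual-service-ratings-test-abort.yaml) instead injects:
      #   abort:
      #     percentage:
      #       value: 100.0
      #     httpStatus: 500
    route:
    - destination:
        host: ratings
        subset: v1
  - route:
    - destination:
        host: ratings
        subset: v1
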
  • Tear down the rules
kubectl delete -f samples/bookinfo/networking/virtual-service-all-v1.yaml