RabbitMQ Cluster Setup (Docker)

1. RabbitMQ Cluster Setup

1.1 Images

  • docker pull rabbitmq:management
  • docker pull centos:7

1.2 docker-compose.yml

version: '2'
services:
  ha-rabbitmq01:
    image: rabbitmq:management
    container_name: ha-rabbitmq01
    restart: always
    hostname: ha-rabbitmq01
    privileged: true
    environment:
      - "RABBITMQ_NODENAME=ha-rabbitmq01"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - "5672:5672"
      - "15672:15672"
    networks:
      ha-rabbitmq-network:
        ipv4_address: 172.50.0.2
  ha-rabbitmq02:
    image: rabbitmq:management
    container_name: ha-rabbitmq02
    restart: always
    hostname: ha-rabbitmq02
    privileged: true
    environment:
      - "RABBITMQ_NODENAME=ha-rabbitmq02"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - "5673:5672"
      - "15673:15672"
    networks:
      ha-rabbitmq-network:
        ipv4_address: 172.50.0.3
  ha-rabbitmq03:
    image: rabbitmq:management
    container_name: ha-rabbitmq03
    restart: always
    hostname: ha-rabbitmq03
    privileged: true
    environment:
      - "RABBITMQ_NODENAME=ha-rabbitmq03"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - "5674:5672"
      - "15674:15672"
    networks:
      ha-rabbitmq-network:
        ipv4_address: 172.50.0.4
networks:
  ha-rabbitmq-network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.50.0.0/16
          gateway: 172.50.0.1
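
To bring the cluster up, run docker-compose from the directory containing this file. A small sketch, assuming the directory is named harabbitmq, which is what gives the network the name harabbitmq_ha-rabbitmq-network reused in section 2.8:

# Start the three nodes in the background and check that they came up
docker-compose up -d
docker-compose ps
docker network ls | grep ha-rabbitmq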

To start the nodes with individual docker run commands instead, create the network first and then attach each container to it; a sketch of that step follows the three commands below.

docker run -it -d --privileged --name ha-rabbitmq01 -p 15672:15672 -p 25672:25672 -p 5672:5672 -e RABBITMQ_NODENAME=ha-rabbitmq01 -h ha-rabbitmq01 rabbitmq:management

docker run -it -d --privileged --name ha-rabbitmq02 -p 15673:15672 -p 25673:25672 -p 5673:5672  -e RABBITMQ_NODENAME=ha-rabbitmq02  -h ha-rabbitmq02 rabbitmq:management

docker run -it -d --privileged --name ha-rabbitmq03 -p 15674:15672 -p 25674:25672 -p 5674:5672 -e RABBITMQ_NODENAME=ha-rabbitmq03  -h ha-rabbitmq03 rabbitmq:management
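
A minimal sketch of that network step; the subnet mirrors the docker-compose file, and the network name ha-rabbitmq-network here is only an example. The --network/--ip options would then be appended to each docker run command above:

# Create the shared bridge network first
docker network create --driver bridge --subnet 172.50.0.0/16 --gateway 172.50.0.1 ha-rabbitmq-network
# Then add, for example, "--network ha-rabbitmq-network --ip 172.50.0.2" to the first run command
# (use 172.50.0.3 and 172.50.0.4 for ha-rabbitmq02 and ha-rabbitmq03)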

1.3 Syncing .erlang.cookie

# Check the cookie value on the first node
more ~/.erlang.cookie
PKBRYYBVWAZHJCAVAHBJ
# Write the same value on nodes 02 and 03
echo "PKBRYYBVWAZHJCAVAHBJ" > ~/.erlang.cookie

Remember to configure /etc/hosts on each node as well.

echo "172.50.0.2      ha-rabbitmq01" >> /etc/hosts
echo "172.50.0.3      ha-rabbitmq02" >> /etc/hosts
echo "172.50.0.4      ha-rabbitmq03" >> /etc/hosts

1.4 Cluster Configuration

Enter the ha-rabbitmq02 and ha-rabbitmq03 containers in turn (docker exec -it <container id> /bin/bash) and run the following:

rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl join_cluster --ram ha-rabbitmq01@ha-rabbitmq01
rabbitmqctl start_app


# Check the cluster status
rabbitmqctl cluster_status
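
The --ram flag joins nodes 02 and 03 as RAM nodes; drop it to join them as disc nodes. A node's type can also be changed afterwards with the change_cluster_node_type subcommand:

# Convert a running RAM node into a disc node
rabbitmqctl stop_app
rabbitmqctl change_cluster_node_type disc
rabbitmqctl start_app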

If you see errors like "Stopping rabbit application on node ha-rabbitmq02@ha-rabbitmq02 ... Error: unable to perform an operation on node 'ha-rabbitmq02@ha-rabbitmq02'. Please see diagnostics information and suggestions below.", they are usually caused by mismatched Erlang cookies.


1.5 Mirrored Queues

rabbitmqctl set_policy ha-all "^" '{"ha-mode":"all"}'
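
To confirm the policy is in place, or to mirror only a subset of queues, the pattern argument can be narrowed (the "ha." prefix below is just an illustration):

# List the policies on the default vhost
rabbitmqctl list_policies
# Example: mirror only queues whose names start with "ha."
rabbitmqctl set_policy ha-subset "^ha\." '{"ha-mode":"all"}'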

Or configure it from the management web page.


2. HAProxy Setup

https://www.haproxy.org/#down

https://src.fedoraproject.org/repo/pkgs/haproxy/haproxy-2.4.4.tar.gz/

docker run -it -d --privileged --name haproxy-keepalived-v1 centos:7 /usr/sbin/init

2.1 Installing from Source

yum update
yum install -y gcc wget net-tools make vim initscripts ipvsadm tcpdump
cd /usr/local/src
wget https://src.fedoraproject.org/repo/pkgs/haproxy/haproxy-2.4.4.tar.gz/sha512/a8987e8342fdbec7e48de09a4391a67e77e05493260e0e561e8c185b6457b8e1086cc45ce04ebf3365699c008dff81667490e2fe99c33c0ac3c7513df8ae025c/haproxy-2.4.4.tar.gz
tar -zxvf haproxy-2.4.4.tar.gz
cd haproxy-2.4.4
make TARGET=linux-glibc PREFIX=/usr/local/haproxy-2.4.4
make install PREFIX=/usr/local/haproxy-2.4.4

Alternatively, install it directly with yum install haproxy; that is the method this walkthrough actually used.

2.2 Configure Environment Variables

vim /etc/profile

export HAPROXY_HOME=/usr/local/haproxy-2.4.4
export PATH=$PATH:$HAPROXY_HOME/sbin

Make the environment variables take effect immediately:

source /etc/profile

2.3 Verify the Installation

[root@ha-rabbitmq-haproxy01 haproxy-2.4.4]# haproxy -v
HAProxy version 2.4.4-acb1d0b 2021/09/07 - https://haproxy.org/
Status: long-term supported branch - will stop receiving fixes around Q2 2026.
Known bugs: http://www.haproxy.org/bugs/bugs-2.4.4.html
Running on: Linux 3.10.0-1160.36.2.el7.x86_64 #1 SMP Wed Jul 21 11:57:15 UTC 2021 x86_64

2.4 Load Balancing Configuration

Create a configuration file haproxy.cfg; here it is placed at /etc/haproxy/haproxy.cfg.

mkdir /etc/haproxy

vim /etc/haproxy/haproxy.cfg

global
  daemon
  maxconn 256
  
defaults
  mode http
  timeout connect 5000ms
  timeout client 5000ms
  timeout server 5000ms
 
listen rabbitmq_cluster
  bind 0.0.0.0:5677
  option tcplog
  mode tcp
  timeout client  3h
  timeout server  3h
  balance leastconn
  server ha-rabbitmq01 ha-rabbitmq01:5672 check inter 2s rise 2 fall 3
  server ha-rabbitmq02 ha-rabbitmq02:5672 check inter 2s rise 2 fall 3
  server ha-rabbitmq03 ha-rabbitmq03:5672 check inter 2s rise 2 fall 3

listen http_front
  bind 0.0.0.0:80
  stats uri /haproxy?stats
 
listen rabbitmq_admin
  bind 0.0.0.0:8001
  server ha-rabbitmq01 ha-rabbitmq01:15672 check inter 2s rise 2 fall 3
  server ha-rabbitmq02 ha-rabbitmq02:15672 check inter 2s rise 2 fall 3
  server ha-rabbitmq03 ha-rabbitmq03:15672 check inter 2s rise 2 fall 3
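
HAProxy can validate the file before it is started; -c performs a configuration check only:

haproxy -c -f /etc/haproxy/haproxy.cfg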

2.5 Startup

Add the backend host entries first so HAProxy can resolve the ha-rabbitmq* names, then start it:

echo "172.50.0.2      ha-rabbitmq01" >> /etc/hosts
echo "172.50.0.3      ha-rabbitmq02" >> /etc/hosts
echo "172.50.0.4      ha-rabbitmq03" >> /etc/hosts
haproxy -f /etc/haproxy/haproxy.cfg

2.6 Check the Process

ps aux|grep haproxy
killall haproxy

2.7 Check the Results

2.7.1 HAProxy Monitoring Page

(Screenshot: the HAProxy stats page, served at the /haproxy?stats URI configured above.)

2.7.2 RabbitMQ Web Management Page

(Screenshot: the RabbitMQ management UI reached through the proxy on port 8001.)

2.7.3 AMQP Service

(Screenshot: the AMQP service reached through the proxy on port 5677.)

Alternatively, visit http://106.12.203.184:8001/api/vhosts
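
The management API uses HTTP basic auth, so a command-line check needs credentials. A sketch with curl, assuming a user that is allowed to log in remotely (the default guest account is restricted to localhost unless that restriction is lifted):

# List vhosts through the proxy on port 8001; replace user:pass and the address with your own
curl -u user:pass http://106.12.203.184:8001/api/vhosts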

2.8 Packaging as an Image

Once everything is installed, the container can be committed as an image.

docker commit -a 'gan' -m 'haproxy-keepalived-v1' haproxy-keepalived-v1 ha-haproxy-v1
docker rm -f haproxy-keepalived-v1

docker run -it -d --privileged --name ha-haproxy01 -p 5677:5677 -p 8181:80 -p 8001:8001 --network harabbitmq_ha-rabbitmq-network --ip 172.50.0.5 --add-host ha-rabbitmq01:172.50.0.2 --add-host ha-rabbitmq02:172.50.0.3 --add-host ha-rabbitmq03:172.50.0.4 ha-haproxy-v1 /usr/sbin/init

docker run -it -d --privileged --name ha-haproxy02 -p 5688:5677 -p 8191:80 -p 8011:8001 --network harabbitmq_ha-rabbitmq-network --ip 172.50.0.6 --add-host ha-rabbitmq01:172.50.0.2 --add-host ha-rabbitmq02:172.50.0.3 --add-host ha-rabbitmq03:172.50.0.4  ha-haproxy-v1 /usr/sbin/init

3. Keepalived Setup

3.1 Installation

yum -y install keepalived
echo "172.50.0.2      ha-rabbitmq01" >> /etc/hosts
echo "172.50.0.3      ha-rabbitmq02" >> /etc/hosts
echo "172.50.0.4      ha-rabbitmq03" >> /etc/hosts

3.2 Configuration and Startup

vim /etc/keepalived/haproxy_check.sh

#!/bin/bash
# If HAProxy is not running, try to restart it; if it still is not up
# after a short wait, stop keepalived so the VIP fails over.

A=`ps -C haproxy --no-header | wc -l`
if [ $A -eq 0 ]; then
    systemctl start haproxy
    sleep 3
    if [ `ps -C haproxy --no-header | wc -l` -eq 0 ]; then
        systemctl stop keepalived
    fi
fi
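
The script must be executable, otherwise keepalived's vrrp_script cannot run it:

chmod +x /etc/keepalived/haproxy_check.sh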

vim /etc/keepalived/keepalived.conf

global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.50.131
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}

vrrp_script chk_haproxy {
    script "/etc/keepalived/haproxy_check.sh"
    interval 5
    weight 10
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 100
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    track_script {
        chk_haproxy
    }
    unicast_src_ip 172.50.0.5
    unicast_peer {
        172.50.0.6
    }
    virtual_ipaddress {
        172.50.0.100
    }
}

keepalived -f /etc/keepalived/keepalived.conf
ps aux|grep keepalived
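
ha-haproxy02 needs its own keepalived.conf. A sketch of the vrrp_instance on the backup node; apart from the role, the priority, and the swapped unicast addresses it matches the master configuration above:

vrrp_instance VI_1 {
    state BACKUP              # backup role on the second proxy
    interface eth0
    virtual_router_id 100     # must match the master
    priority 90               # lower than the master's 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    track_script {
        chk_haproxy
    }
    unicast_src_ip 172.50.0.6   # this node
    unicast_peer {
        172.50.0.5              # the master
    }
    virtual_ipaddress {
        172.50.0.100
    }
}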

If ipvs-related errors appear, the host machine also needs the following installed and configured:

# Skip the install if it is already present; just do the configuration below
yum install -y ipvsadm
# enable IP forward
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
echo "net.ipv4.ip_nonlocal_bind = 1" >> /etc/sysctl.conf

sysctl -p
touch /etc/sysconfig/ipvsadm
systemctl start ipvsadm
systemctl enable ipvsadm

Ping the VIP to confirm it is reachable:

[root@ha-haproxy01 /]# ping 172.50.0.100
PING 172.50.0.100 (172.50.0.100) 56(84) bytes of data.
64 bytes from 172.50.0.100: icmp_seq=1 ttl=64 time=0.016 ms
64 bytes from 172.50.0.100: icmp_seq=2 ttl=64 time=0.017 ms
^C
--- 172.50.0.100 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.016/0.016/0.017/0.004 ms

3.3 Verifying Failover

Install the tcpdump package and watch the VRRP advertisements:

yum install tcpdump -y
tcpdump -i eth0 vrrp -n

The IP configuration on the master looks like this; note the VIP 172.50.0.100 on eth0:

[root@ha-haproxy01 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
173: eth0@if174: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:32:00:05 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.50.0.5/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet 172.50.0.100/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe32:5/64 scope link 
       valid_lft forever preferred_lft forever

Kill keepalived on the master:

pkill keepalived

At this point the VIP has moved to ha-haproxy02.

[root@ha-rabbitmq-haproxy01 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
173: eth0@if174: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:32:00:05 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.50.0.5/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe32:5/64 scope link 
       valid_lft forever preferred_lft forever
[root@ha-haproxy02 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
171: eth0@if172: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:32:00:06 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.50.0.6/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet 172.50.0.100/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe32:6/64 scope link 
       valid_lft forever preferred_lft forever

Start keepalived on the master again, and the VIP moves back:

[root@ha-haproxy01 /]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
173: eth0@if174: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:32:00:05 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.50.0.5/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet 172.50.0.100/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe32:5/64 scope link 
       valid_lft forever preferred_lft forever

3.4 Common Issues

  1. Can't initialize ipvs: Protocol not available. Are you sure that IP Virtual Server is built in the kernel or as module?

     The ipvsadm module needs to be installed.

  2. bash: ip: command not found

     The command is missing inside the container; yum -y install initscripts fixes it.

  3. Keepalived_healthcheckers exited with permanent error FATAL. Terminating

     This error almost always means keepalived.conf is misconfigured.

  4. Many people hit permission problems when running inside containers; usually the container was started without --privileged.

  5. Clients connecting through the proxy address may see timeouts or dropped connections. This is HAProxy behaviour: it actively closes idle TCP connections. Adjust the connection timeouts to suit the workload (timeout client 3h / timeout server 3h), or have the application reconnect when the connection drops.

References

Docker环境下搭建Rabbitmq+Haproxy+Keepalived高可用负载均衡集群 (Building a RabbitMQ + HAProxy + Keepalived HA load-balanced cluster with Docker)

docker下用keepalived+Haproxy实现高可用负载均衡集群 (HA load balancing with Keepalived + HAProxy under Docker)

RabbitMQ 高可用集群搭建 (Setting up a RabbitMQ high-availability cluster)
