Fabric Multi-Host Deployment: 4+1 Mode on Five Virtual Machines
1. Prepare five virtual machines
One machine runs the orderer and the other four each run one peer. The IPs below are the ones used in the extra_hosts entries throughout this guide:
orderer.example.com        192.168.121.141
peer0.org1.example.com     192.168.121.142
peer1.org1.example.com     192.168.121.143
peer0.org2.example.com     192.168.121.144
peer1.org2.example.com     192.168.121.145
The first few steps below are all carried out on a single machine (the server hosting the peer0.org1.example.com node); once the files are prepared they are copied to each of the other nodes.
Before moving on to step 2, run ./network_setup.sh down to shut down any previous single-host deployment.
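The shutdown script is the one shipped with the e2e_cli example; the directory below is an assumption based on the standard $GOPATH layout used later in this guide:
cd $GOPATH/src/github.com/hyperledger/fabric/examples/e2e_cli   # assumed location of the e2e_cli example
./network_setup.sh down                                         # stops and removes the single-host containers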
2. Generate the keys, certificates, and genesis block
The key pairs and certificates are used for secure server-to-server communication, and a genesis block is required in order to create the channel and have the other nodes join it. All of these prerequisite files can be generated with a single command, using the script the project already provides:
./generateArtifacts.sh mychannel
After the command finishes, a channel-artifacts folder is created containing the files for the mychannel channel, along with a crypto-config folder containing the keys and certificates for every node.
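Before distributing anything, it is worth a quick look at what was generated (a sanity check; exact file names can vary slightly between fabric versions):
ls crypto-config/        # ordererOrganizations/ and peerOrganizations/ holding each node's MSP and TLS material
ls channel-artifacts/    # genesis.block, channel.tx and the anchor peer update transactions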
3. Set up the peer nodes' docker-compose file
e2e_cli ships several yaml files; we can start from docker-compose-cli.yaml (note that there are four peer nodes: peer0.org1, peer1.org1, peer0.org2 and peer1.org2; the configuration for each of the four is shown in turn below):
cp docker-compose-cli.yaml docker-compose-peer.yaml
Then edit docker-compose-peer.yaml: remove the orderer configuration and keep only the peer and cli services. Because this is a multi-host deployment and the nodes talk to each other by hostname, we also need to adjust the container's hosts file via the extra_hosts setting. The modified peer configuration (strictly speaking this is the peer0.org1 node; the other three peers will be derived from this file later) looks like this:
version: '2'

services:

  peer0.org1.example.com:
    container_name: peer0.org1.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer0.org1.example.com
    extra_hosts:
      - "orderer.example.com:192.168.121.141"

  cli:
    container_name: cli
    image: hyperledger/fabric-tools
    tty: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    volumes:
      - /var/run/:/host/var/run/
      - ../chaincode/go/:/opt/gopath/src/github.com/hyperledger/fabric/examples/chaincode/go
      - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
      - ./scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/
      - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
    depends_on:
      - peer0.org1.example.com
    extra_hosts:
      - "orderer.example.com:192.168.121.141"
      - "peer0.org1.example.com:192.168.121.142"
      - "peer1.org1.example.com:192.168.121.143"
      - "peer0.org2.example.com:192.168.121.144"
      - "peer1.org2.example.com:192.168.121.145"
In single-host mode the four peers are mapped to different host ports, but in a multi-host deployment this is unnecessary, so edit base/docker-compose-base.yaml and give every peer the same port mappings:
    ports:
      - 7051:7051
      - 7052:7052
      - 7053:7053
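After the edit, every peer service in base/docker-compose-base.yaml should expose the same three ports; a quick, optional way to eyeball this with plain grep:
grep -A 3 'ports:' base/docker-compose-base.yaml   # each peer service should list 7051, 7052 and 7053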
4. Set up the orderer node's docker-compose file
Just as we did for the peer configuration, copy a yaml file and modify it:
cp docker-compose-cli.yaml docker-compose-orderer.yaml
On the orderer server we only need to keep the orderer service; the peer and cli sections can all be deleted, and the orderer does not need any extra_hosts entries.
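With only the orderer service left, docker-compose-orderer.yaml ends up very short. A minimal sketch, assuming the orderer section from the stock e2e_cli docker-compose-cli.yaml is kept unchanged (check it against your own copy):

version: '2'

services:

  orderer.example.com:
    extends:
      file: base/docker-compose-base.yaml
      service: orderer.example.com
    container_name: orderer.example.com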
5. Distribute the configuration files
The previous four steps were all carried out on peer0.org1.example.com; now these files need to be distributed to the other four servers. For copying files between Linux machines we can use scp.
First log in to orderer.example.com and delete the existing local e2e_cli folder:
rm -R e2e_cli
Then log back in to the peer0.org1.example.com server and move up to the examples folder, so that the whole e2e_cli directory underneath it can be copied to the orderer server in one go (adjust the user name and IP address to match your own orderer server):
scp -r e2e_cli fabric@10.51.120.220:/home/fabric/go/src/github.com/hyperledger/fabric/examples/
That completes the copy for orderer.example.com. Next, run scp again in the same way to copy the folder to peer1.org1.example.com, then make a small change to docker-compose-peer.yaml there: switch the container being started to peer1.org1.example.com, add an IP mapping for peer0.org1.example.com, and change the cli's dependency to peer1.org1.example.com accordingly. This is the modified configuration file on peer1.org1.example.com:
version: '2'

services:

  peer1.org1.example.com:
    container_name: peer1.org1.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer1.org1.example.com
    extra_hosts:
      - "orderer.example.com:192.168.121.141"
      - "peer0.org1.example.com:192.168.121.142"

  cli:
    container_name: cli
    image: hyperledger/fabric-tools
    tty: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=peer1.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls/ca.crt
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    volumes:
      - /var/run/:/host/var/run/
      - ../chaincode/go/:/opt/gopath/src/github.com/hyperledger/fabric/examples/chaincode/go
      - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
      - ./scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/
      - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
    depends_on:
      - peer1.org1.example.com
    extra_hosts:
      - "orderer.example.com:192.168.121.141"
      - "peer0.org1.example.com:192.168.121.142"
      - "peer1.org1.example.com:192.168.121.143"
      - "peer0.org2.example.com:192.168.121.144"
      - "peer1.org2.example.com:192.168.121.145"
Next, keep using scp to send the folder from peer0.org1.example.com to peer0.org2.example.com and peer1.org2.example.com, and again edit docker-compose-peer.yaml on each server so that it starts the corresponding peer node; the copy commands follow the same pattern as before (see the sketch below).
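A sketch of the two copies, assuming the same fabric user and GOPATH layout as before and that the servers are reachable at the IPs used in extra_hosts:
scp -r e2e_cli fabric@192.168.121.144:/home/fabric/go/src/github.com/hyperledger/fabric/examples/   # peer0.org2.example.com
scp -r e2e_cli fabric@192.168.121.145:/home/fabric/go/src/github.com/hyperledger/fabric/examples/   # peer1.org2.example.com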
The docker-compose-peer.yaml configuration file on peer0.org2.example.com:
version: '2'

services:

  peer0.org2.example.com:
    container_name: peer0.org2.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer0.org2.example.com
    extra_hosts:
      - "orderer.example.com:192.168.121.141"
      - "peer0.org1.example.com:192.168.121.142"

  cli:
    container_name: cli
    image: hyperledger/fabric-tools
    tty: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=peer0.org2.example.com:7051
      - CORE_PEER_LOCALMSPID=Org2MSP
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    volumes:
      - /var/run/:/host/var/run/
      - ../chaincode/go/:/opt/gopath/src/github.com/hyperledger/fabric/examples/chaincode/go
      - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
      - ./scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/
      - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
    depends_on:
      - peer0.org2.example.com
    extra_hosts:
      - "orderer.example.com:192.168.121.141"
      - "peer0.org1.example.com:192.168.121.142"
      - "peer1.org1.example.com:192.168.121.143"
      - "peer0.org2.example.com:192.168.121.144"
      - "peer1.org2.example.com:192.168.121.145"
The docker-compose-peer.yaml configuration file on peer1.org2.example.com:
version: '2'

services:

  peer1.org2.example.com:
    container_name: peer1.org2.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer1.org2.example.com
    extra_hosts:
      - "orderer.example.com:192.168.121.141"
      - "peer0.org1.example.com:192.168.121.142"

  cli:
    container_name: cli
    image: hyperledger/fabric-tools
    tty: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=peer1.org2.example.com:7051
      - CORE_PEER_LOCALMSPID=Org2MSP
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/ca.crt
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    volumes:
      - /var/run/:/host/var/run/
      - ../chaincode/go/:/opt/gopath/src/github.com/hyperledger/fabric/examples/chaincode/go
      - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
      - ./scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/
      - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
    depends_on:
      - peer1.org2.example.com
    extra_hosts:
      - "orderer.example.com:192.168.121.141"
      - "peer0.org1.example.com:192.168.121.142"
      - "peer1.org1.example.com:192.168.121.143"
      - "peer0.org2.example.com:192.168.121.144"
      - "peer1.org2.example.com:192.168.121.145"
6. Start Fabric
All the files are now in place and the Fabric network can be started.
6.1 Start the orderer
Start the orderer node first. On the orderer server, run:
docker-compose -f docker-compose-orderer.yaml up -d
Once it is up, docker ps should show a running container named orderer.example.com.
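If the container is not listed or exits right away, its log usually shows why (a standard docker command):
docker logs orderer.example.com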
6.2 Start the peers
Next, switch to the peer0.org1.example.com server and start that server's peer node and cli with:
docker-compose -f docker-compose-peer.yaml up -d
Once the command finishes, docker ps should show two running containers (the peer and the cli). Run the same command on the other three peer servers (peer1.org1, peer0.org2 and peer1.org2) so that every peer and its cli container are running.
The whole 4+1 Fabric network is now in place; what remains is to create the channel and run the ChainCode.
6.3 Create the Channel and test the ChainCode
Switch to the peer0.org1.example.com server (the anchor peer) and use its cli to create the channel and drive the ChainCode. First enter the cli container:
docker exec -it cli bash
Inside the container the prompt changes to:
root@b41e67d40583:/opt/gopath/src/github.com/hyperledger/fabric/peer#
which shows that we are now inside the cli container as root. The official script that creates the channel and tests the ChainCode is already mounted into the cli container, so all we need to run inside the cli is:
./scripts/script.sh mychannel
The script then works through every on-chain step automatically: creating the channel, joining the other nodes to it, updating the anchor peers, installing and instantiating the ChainCode, initializing the accounts, querying, transferring, and querying again, until at the end the system prints:
=============== All GOOD, End-2-End execution completed ================
This means our 4+1 multi-host Fabric deployment succeeded. We are currently inside the cli container on peer0.org1.example.com; if we switch to the peer0.org2.example.com server and run docker ps, the 2 containers that were there before have become 3, because instantiating the ChainCode creates an extra container.
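To check the ledger by hand afterwards, the example chaincode can also be queried directly from either organization's cli container; a sketch, assuming script.sh instantiated chaincode_example02 under its default name mycc:
peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'   # prints the current balance of account "a"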
That covers all the steps of the 4+1 multi-host Fabric deployment.
The end.