Disabling specific warnings
When building against third-party libraries or source code, the compiler often emits warnings that do not come from our own code. We cannot easily fix them, yet a wall of unrelated warnings on every build is annoying to look at, and worse, warnings produced by our own code can get buried in the noise and overlooked.
So we need a way to silence the warnings produced by this third-party code and these libraries.
One way to disable a specific warning is a command-line option: with gcc, a particular warning is usually suppressed with a flag of the form -Wno-xxxx, where xxxx is the warning name (for example -Wno-unused-parameter). However, this turns the warning off for all code, whether it comes from the third-party library or from our own sources, so it is not a good fit here.
A specific warning can also be disabled in the code itself with #pragma directives, which turn the warning off only for a designated region of code.
With MSVC the usage looks like this:
#ifdef _MSC_VER
// disable the warnings produced while compiling CImg.h
#pragma warning( push )
#pragma warning( disable: 4267 4319 )
#endif
#include "CImg.h"
#ifdef _MSC_VER
#pragma warning( pop )
#endif
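MSVC additionally offers the suppress specifier, which silences a warning for the very next line only, without a push/pop pair. A minimal sketch (the warning number C4267 and the function are just illustrative examples):
#include <vector>

void example(const std::vector<int>& v) {
#ifdef _MSC_VER
#pragma warning( suppress: 4267 )  // applies only to the next source line
#endif
  int n = v.size();  // would otherwise warn about size_t -> int conversion (C4267)
  (void)n;
}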
With gcc the usage looks like this:
#ifdef __GNUC__
// disable the warning produced by the line using _Base::_Base;
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Winherited-variadic-ctor"
#endif
.....
namespace cimg_library {
template<typename T>
class CImgWrapper : public CImg<T> {
public:
  using _Base = CImg<T>;
  using _Base::_Base; // inherit the base-class constructors
  ......
};
} /* namespace cimg_library */
#ifdef __GNUC__
#pragma GCC diagnostic pop
#endif
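The two mechanisms can be combined so that the same third-party include compiles quietly under both compilers. A minimal sketch (the warning numbers/names are examples and should match whatever your build actually reports):
#if defined(_MSC_VER)
#pragma warning( push )
#pragma warning( disable: 4267 4319 )                 // example MSVC warning numbers
#elif defined(__GNUC__)
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wunused-parameter"   // example gcc warning name
#endif

#include "CImg.h"

#if defined(_MSC_VER)
#pragma warning( pop )
#elif defined(__GNUC__)
#pragma GCC diagnostic pop
#endif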
References:
https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html
https://gcc.gnu.org/onlinedocs/gcc/Diagnostic-Pragmas.html#Diagnostic-Pragmas
Zilliqa startup
Run build/tests/Node/pre_run.sh:
python tests/Zilliqa/test_zilliqa_local.py stop
python tests/Zilliqa/test_zilliqa_local.py setup 20
python tests/Zilliqa/test_zilliqa_local.py prestart 10
# clean up persistence storage
rm -rf lookup_local_run/node*
python tests/Zilliqa/test_zilliqa_lookup.py setup 1
- On stop it runs
os.system('fuser -k ' + str(NODE_LISTEN_PORT + x) + '/tcp')
fuser -k kills the processes accessing the given resource, here each node's TCP port.
- setup 20
- Creates 20 directories named node_000x under build/local_run, and copies build/tests/Zilliqa/zilliqa and build/tests/Zilliqa/sendcmd into the build/local_run directory.
- prestart 10
- Calls build/tests/Zilliqa/genkeypair to generate 20 keypairs.
- Writes the generated keypairs to build/local_run/keys.txt.
- Uses the first 10 keypairs to generate build/dsnodes.xml.
- Uses the first 10 keypairs together with ip and port to generate build/config_normal.xml.
- Uses all 20 keypairs together with ip and port to generate build/ds_whitelist.xml.
- Uses keypair[0] of each of the 20 keypairs to generate build/shard_whitelist.xml.
- test_zilliqa_lookup.py setup 1 (configures 1 lookup node)
- Copies build/tests/Zilliqa/zilliqa to build/lookup_local_run/node_0001/lzilliqa.
- Generates a keypair for the lookup node.
- Sets every peer/pubkey under node/look_ups in build/constants_local.xml (this file is copied from constants_local.xml) to the keypair[0] generated above.
- Writes the generated keypair to build/lookup_local_run/keys.txt.
- Uses keypair[0] of the generated keypair together with ip and port to generate build/tests/config_lookup.xml.
Run build/tests/Node/test_node_lookup.sh:
- test_zilliqa_lookup.py start
- Copies build/config_lookup.xml to build/lookup_local_run/node_0001/config.xml.
- Copies build/constants_local.xml to build/lookup_local_run/node_0001/constants.xml.
- Copies build/dsnodes.xml to build/lookup_local_run/node_0001/dsnodes.xml.
- Executes
cd build/lookup_local_run/node_0001; echo keypair[0] keypair[1] > mykey.txt; ./lzilliqa keypair[1] keypair[0] 127.0.0.1 4001 0 0 0 > ./error_log_zilliqa 2>&1 &
to start the lookup node.
Run build/tests/Node/test_node_simple.sh:
- test_zilliqa_local.py start 10 (the 10 here is the number of DS nodes; this command runs the first 10 directories under build/local_run as DS nodes and the remaining ones as shard nodes)
- Copies build/ds_whitelist.xml to build/local_run/node_000x/ds_whitelist.xml, build/shard_whitelist.xml to build/local_run/node_000x/shard_whitelist.xml, build/constants_local.xml to build/local_run/node_000x/constants.xml, and build/dsnodes.xml to build/local_run/node_000x/dsnodes.xml.
- For the first 10 nodes (the DS nodes), copies build/config_normal.xml to build/local_run/node_000x/config.xml and executes
cd build/local_run/node_000x; echo keypair[0] keypair[1] > mykey.txt; ./zilliqa keypair[1] keypair[0] 127.0.0.1 500x 1 0 0 > ./error_log_zilliqa 2>&1 &
- For the shard nodes it directly executes
cd build/local_run/node_000x; echo keypair[0] keypair[1] > mykey.txt; ./zilliqa keypair[1] keypair[0] 127.0.0.1 500x 0 0 0 > ./error_log_zilliqa 2>&1 &
This brings up all 10 DS nodes and the 10 shard nodes.
- For each DS node it runs
tests/Zilliqa/test_zilliqa_local.py sendcmd $ds 01000000000000000000000000000100007F00001389
- which in turn runs
build/tests/Zilliqa/sendcmd 500x cmd 01000000000000000000000000000100007F00001389
- For each shard node it runs
tests/Zilliqa/test_zilliqa_local.py startpow $node 10 0000000000000001 05 03 2b740d75891749f94b6a8ec09f086889066608e4418eda656c93443e8310750a e8cc9106f8a28671d91e2de07b57b828934481fadf6956563b963bb8e5c266bf
- Here $node is the shard node index, 10 is the number of DS nodes, 0000000000000001 is the block number, 05 and 03 are the difficulty for DS nodes and shard nodes respectively, and the last two values are two random numbers.
"{0:0{1}x}".format(NODE_LISTEN_PORT, 8)
上面中的第一个0表示NODE_LISTEN_PORT,第二个0表示填充字符0,{1}表示的后面的8,x表示转换成十六进制。
- For each shard node this executes
build/tests/Zilliqa/sendcmd 500x cmd 0200 0000000000000001 05 03 2b740d75891749f94b6a8ec09f086889066608e4418eda656c93443e8310750a e8cc9106f8a28671d91e2de07b57b828934481fadf6956563b963bb8e5c266bf DS[0].keypair[0] 0000000000000000000000000100007F {0:0{1}x}".format(NODE_LISTEN_PORT + x, 8) ... DS[9].keypair[0] 0000000000000000000000000100007F {0:0{1}x}".format(NODE_LISTEN_PORT + x, 8)
Here DS[0].keypair[0] is keypair[0] of DS node 0, and the ... stands for the 8 DS nodes in between, each entry having the same format as DS[9].keypair[0] 0000000000000000000000000100007F {0:0{1}x}".format(NODE_LISTEN_PORT + x, 8). In short, each shard node performs PoW and sends its PoW result to every DS node.
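As promised above, here is a tiny C++ equivalent of the port encoding used in these payloads (illustration only, not code from the repository):
#include <cstdio>

int main() {
  const unsigned int NODE_LISTEN_PORT = 5001;  // example port
  char buf[16];
  // 8-digit, zero-padded, lowercase hex -- the C++ counterpart of
  // "{0:0{1}x}".format(NODE_LISTEN_PORT, 8) in the Python test script
  std::snprintf(buf, sizeof(buf), "%08x", NODE_LISTEN_PORT);
  std::printf("%s\n", buf);  // prints 00001389
}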
Zilliqa fallback block
Fallback Mechanism
We have introduced a fallback mechanism to improve the robustness of the network and ensure that it always makes progress in spite of unforeseen stalls. When the entire system enters a stall for a significant period of time, the first shard in the system is tasked with reviving the network by assuming the responsibilities of the Directory Service (DS) committee. If, after this, the system still fails to make progress, the second shard proceeds to take over. This fallback mechanism is actualized by the candidate fallback shard performing consensus over a new block type — the Fallback Block — designed for this mechanism. Once consensus is achieved, the block is broadcasted to the rest of the network, and a new round of proof-of-work (PoW) submissions begins thereafter.
The above is the explanation from the Zilliqa blog: the mechanism exists to make the network more robust; when the DS committee stalls, a shard takes over the DS duties to recover the whole system.
A short summary of the fallback block mechanism: Node::FallbackTimerLaunch() starts a timer; when the timer exceeds the waiting time assigned to this shard (the waiting time is staggered per shard, FALLBACK_INTERVAL_WAITING * (shard id + 1) in the code below), the node calls RunConsensusOnFallback() to run consensus, while the nodes of the other shards enter the WAITING_FALLBACKBLOCK state. RunConsensusOnFallback() calls RunConsensusOnFallbackWhenLeader() or RunConsensusOnFallbackWhenBackup() depending on whether the node is the shard leader or a backup, and sets the node's own state to FALLBACK_CONSENSUS.
RunConsensusOnFallbackWhenLeader() calls ComposeFallbackBlock() to assemble the fallback block into Node::m_pendingFallbackBlock and creates the consensus object in Node::m_consensusObject, with m_classByte = NODE and m_insByte = FALLBACKCONSENSUS. When StartConsensus generates the consensus message for the fallback block, these two values are written into the first two bytes of the message, and the receiver picks the corresponding handler according to insByte. The leader then sends the fallback block consensus message to every node in its shard, including itself.
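In other words, the first two bytes act as a routing key. A minimal self-contained sketch of that dispatch (the enum values and the Dispatch function are hypothetical; in the real code the routing happens inside Node::Execute and the messenger layer):
#include <cstdint>
#include <vector>

// Illustrative only: message[0] is the class byte, message[1] is the
// instruction byte; the receiver uses them to pick a handler.
enum ClassByte : std::uint8_t { NODE_CLASS = 0x01 };                        // placeholder value
enum InsByte : std::uint8_t { FALLBACKCONSENSUS = 0x10, FALLBACKBLOCK = 0x11 };  // placeholder values

bool Dispatch(const std::vector<std::uint8_t>& message) {
  if (message.size() < 2 || message[0] != NODE_CLASS) {
    return false;
  }
  switch (message[1]) {
    case FALLBACKCONSENSUS:
      // would call Node::ProcessFallbackConsensus(message, offset, from)
      return true;
    case FALLBACKBLOCK:
      // would call Node::ProcessFallbackBlock(message, offset, from)
      return true;
    default:
      return false;
  }
}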
RunConsensusOnFallbackWhenBackup() creates the consensus object in Node::m_consensusObject from the current chain information, as preparation for receiving the fallback block consensus messages.
When a shard node (the leader included) receives the leader's fallback block consensus message, it uses the second byte of the message, ins_byte, to select the handler, which here is Node::ProcessFallbackConsensus(). Node::ProcessFallbackConsensus() calls m_consensusObject->ProcessMessage(message, offset, from) to run the consensus; note that there are two implementations, ConsensusLeader::ProcessMessage(message, offset, from) and ConsensusBackup::ProcessMessage(message, offset, from). After consensus finishes, the multisignature results of the two rounds are kept in m_consensusObject, and the fallback block is held in the m_consensusObject of every shard node that ran consensus as a backup. If consensus succeeds, Node::ProcessFallbackConsensusWhenDone() completes the multisignature verification (the first-round signature is also part of the content signed in the second round), builds a FallbackBlockWShardingStructure from the current sharding structure and the fallback block, and stores it in m_fallbackBlockDB. It then updates the local m_mediator.m_DSCommittee and m_mediator.m_ds.m_mode, and calls m_mediator.m_ds->StartNewDSEpochConsensus(true) to wait for PoW submissions and run consensus on the DS block. Finally it generates a fallback block message and sends it to the other shard nodes; the ins_byte of this message is FALLBACKBLOCK, so on the receiving side it is handled by Node::ProcessFallbackBlock.
Node::ProcessFallbackBlock, after a round of validation, updates the local DSCommittee, restarts the fallback block waiting timer, forwards the fallback block message to the other shard nodes, and starts PoW.
void Node::FallbackTimerLaunch() {
if (m_fallbackTimerLaunched) {
return; // the fallback timer has already been launched, so exit
}
if (!ENABLE_FALLBACK) {
LOG_GENERAL(INFO, "Fallback is currently disabled");
return;
}
LOG_MARKER();
if (FALLBACK_INTERVAL_STARTED < FALLBACK_CHECK_INTERVAL ||
FALLBACK_INTERVAL_WAITING < FALLBACK_CHECK_INTERVAL) {
LOG_GENERAL(FATAL,
"The configured fallback checking interval must be "
"smaller than the timeout value.");
return;
}
m_runFallback = true;
m_fallbackTimer = 0;
m_fallbackStarted = false;
auto func = [this]() -> void {
while (m_runFallback) {
this_thread::sleep_for(chrono::seconds(FALLBACK_CHECK_INTERVAL));
if (m_mediator.m_ds->m_mode != DirectoryService::IDLE) { // if the DS mode is not IDLE the DS side is fine, so return directly
m_fallbackTimerLaunched = false;
return;
}
lock_guard<mutex> g(m_mutexFallbackTimer);
/* Possible issue: once a shard's fallback block consensus fails, will it keep executing this first if-branch forever? */
if (m_fallbackStarted) {
if (LOOKUP_NODE_MODE) {
LOG_GENERAL(WARNING,
"Node::FallbackTimerLaunch when started is "
"true not expected to be called from "
"LookUp node.");
return;
}
/*
* The else-branch below sets m_fallbackTimer = 0, so the timer starts over here.
* If the timer exceeds FALLBACK_INTERVAL_STARTED, replace this shard's leader
* and run RunConsensusOnFallback() again to redo consensus on the fallback block.
*/
if (m_fallbackTimer >= FALLBACK_INTERVAL_STARTED) {
UpdateFallbackConsensusLeader();
auto func = [this]() -> void { RunConsensusOnFallback(); };
DetachedFunction(1, func);
/* restart the timer */
m_fallbackTimer = 0;
}
} else {
bool runConsensus = false;
if (!LOOKUP_NODE_MODE) {
/* If m_fallbackTimer exceeds the time this shard is supposed to wait, this shard takes over the DS duties to recover the whole network */
/*
* Once the timer exceeds the waiting time of the first shard, the first shard's
* nodes take this first if. Because it sets runConsensus = true, the second if
* below is not executed; and because it sets m_fallbackStarted = true, the next
* loop iteration enters the if (m_fallbackStarted) branch above.
*/
if (m_fallbackTimer >=
(FALLBACK_INTERVAL_WAITING * (m_myshardId + 1))) {
/* Start consensus on the fallback block; during consensus the shard leader is the one who generates it */
auto func = [this]() -> void { RunConsensusOnFallback(); };
DetachedFunction(1, func);
/* set m_fallbackStarted to mark that fallback handling has started */
m_fallbackStarted = true;
runConsensus = true;
m_fallbackTimer = 0;
m_justDidFallback = true;
}
}
if (m_fallbackTimer >= FALLBACK_INTERVAL_WAITING &&
m_state != WAITING_FALLBACKBLOCK &&
m_state != FALLBACK_CONSENSUS_PREP &&
m_state != FALLBACK_CONSENSUS && !runConsensus) {
SetState(WAITING_FALLBACKBLOCK);
m_justDidFallback = true;
cv_fallbackBlock.notify_all();
}
}
m_fallbackTimer += FALLBACK_CHECK_INTERVAL;
}
};
DetachedFunction(1, func);
m_fallbackTimerLaunched = true;
}
void Node::RunConsensusOnFallback() {
if (LOOKUP_NODE_MODE) {
LOG_GENERAL(WARNING,
"DirectoryService::RunConsensusOnFallback not expected "
"to be called from LookUp node.");
return;
}
LOG_MARKER();
SetLastKnownGoodState(); // save the pre-fallback state into m_fallbackState
SetState(FALLBACK_CONSENSUS_PREP); // then set m_state to FALLBACK_CONSENSUS_PREP
// Upon consensus object creation failure, one should not return from the
// function, but rather wait for fallback.
bool ConsensusObjCreation = true;
if (m_isPrimary) { // if this node is the shard leader, RunConsensusOnFallbackWhenLeader builds the fallback block via ComposeFallbackBlock
ConsensusObjCreation = RunConsensusOnFallbackWhenLeader();
if (!ConsensusObjCreation) {
LOG_GENERAL(WARNING, "Error after RunConsensusOnFallbackWhenShardLeader");
}
} else {
ConsensusObjCreation = RunConsensusOnFallbackWhenBackup();
if (!ConsensusObjCreation) {
LOG_GENERAL(WARNING, "Error after RunConsensusOnFallbackWhenShardBackup");
}
}
if (ConsensusObjCreation) {
SetState(FALLBACK_CONSENSUS);
cv_fallbackConsensusObj.notify_all();
}
}
bool Node::RunConsensusOnFallbackWhenLeader() {
if (LOOKUP_NODE_MODE) {
LOG_GENERAL(WARNING,
"Node::"
"RunConsensusOnFallbackWhenLeader not expected "
"to be called from LookUp node.");
return true;
}
LOG_MARKER();
LOG_EPOCH(INFO, to_string(m_mediator.m_currentEpochNum).c_str(),
"I am the fallback leader node. Announcing to the rest.");
{
lock_guard<mutex> g(m_mutexShardMember);
if (!ComposeFallbackBlock()) { // assemble the fallback block
LOG_EPOCH(WARNING, to_string(m_mediator.m_currentEpochNum).c_str(),
"Node::RunConsensusOnFallbackWhenLeader failed.");
return false;
}
// Create new consensus object
m_consensusBlockHash =
m_mediator.m_txBlockChain.GetLastBlock().GetBlockHash().asBytes();
// create the consensus object used for consensus; the leader creates a ConsensusLeader
m_consensusObject.reset(new ConsensusLeader(
m_mediator.m_consensusID, m_mediator.m_currentEpochNum,
m_consensusBlockHash, m_consensusMyID, m_mediator.m_selfKey.first,
*m_myShardMembers, static_cast<unsigned char>(NODE), // this initializes ConsensusLeader::m_classByte
static_cast<unsigned char>(FALLBACKCONSENSUS), // this initializes ConsensusLeader::m_insByte
NodeCommitFailureHandlerFunc(), ShardCommitFailureHandlerFunc()));
}
if (m_consensusObject == nullptr) {
LOG_EPOCH(WARNING, to_string(m_mediator.m_currentEpochNum).c_str(),
"Error: Unable to create consensus leader object");
return false;
}
ConsensusLeader* cl = dynamic_cast<ConsensusLeader*>(m_consensusObject.get());
vector<unsigned char> m;
{
lock_guard<mutex> g(m_mutexPendingFallbackBlock);
// serialize m_pendingFallbackBlock (the fallback block built by ComposeFallbackBlock) into m
m_pendingFallbackBlock->Serialize(m, 0);
}
std::this_thread::sleep_for(std::chrono::seconds(FALLBACK_EXTRA_TIME));
auto announcementGeneratorFunc =
[this](vector<unsigned char>& dst, unsigned int offset,
const uint32_t consensusID, const uint64_t blockNumber,
const vector<unsigned char>& blockHash, const uint16_t leaderID,
const pair<PrivKey, PubKey>& leaderKey,
vector<unsigned char>& messageToCosign) mutable -> bool {
lock_guard<mutex> g(m_mutexPendingFallbackBlock);
return Messenger::SetNodeFallbackBlockAnnouncement(
dst, offset, consensusID, blockNumber, blockHash, leaderID, leaderKey,
*m_pendingFallbackBlock, messageToCosign);
};
// start consensus on the fallback block
cl->StartConsensus(announcementGeneratorFunc);
return true;
}
bool Node::ComposeFallbackBlock() {
if (LOOKUP_NODE_MODE) {
LOG_GENERAL(WARNING,
"Node::ComputeNewFallbackLeader not expected "
"to be called from LookUp node.");
return true;
}
LOG_MARKER();
LOG_GENERAL(INFO, "Composing new fallback block with consensus Leader ID at "
<< m_consensusLeaderID);
Peer leaderNetworkInfo;
if (m_myShardMembers->at(m_consensusLeaderID).second == Peer()) {
leaderNetworkInfo = m_mediator.m_selfPeer;
} else {
leaderNetworkInfo = m_myShardMembers->at(m_consensusLeaderID).second;
}
LOG_GENERAL(INFO, "m_myShardMembers->at(m_consensusLeaderID).second: "
<< m_myShardMembers->at(m_consensusLeaderID).second);
LOG_GENERAL(INFO, "m_mediator.m_selfPeer: " << m_mediator.m_selfPeer);
LOG_GENERAL(INFO, "LeaderNetworkInfo: " << leaderNetworkInfo);
CommitteeHash committeeHash;
if (!Messenger::GetShardHash(m_mediator.m_ds->m_shards.at(m_myshardId),
committeeHash)) {
LOG_EPOCH(WARNING, to_string(m_mediator.m_currentEpochNum).c_str(),
"Messenger::GetShardHash failed.");
return false;
}
BlockHash prevHash = get<BlockLinkIndex::BLOCKHASH>(
m_mediator.m_blocklinkchain.GetLatestBlockLink());
lock_guard<mutex> g(m_mutexPendingFallbackBlock);
// To-do: Handle exceptions.
m_pendingFallbackBlock.reset(new FallbackBlock(
FallbackBlockHeader(
m_mediator.m_dsBlockChain.GetLastBlock().GetHeader().GetBlockNum() +
1,
m_mediator.m_currentEpochNum, m_fallbackState,
{AccountStore::GetInstance().GetStateRootHash()}, m_consensusLeaderID,
leaderNetworkInfo, m_myShardMembers->at(m_consensusLeaderID).first,
m_myshardId, committeeHash, prevHash),
CoSignatures())); // put the assembled fallback block into Node::m_pendingFallbackBlock
return true;
}
bool Messenger::SetNodeFallbackBlockAnnouncement(
bytes& dst, const unsigned int offset, const uint32_t consensusID,
const uint64_t blockNumber, const bytes& blockHash, const uint16_t leaderID,
const pair<PrivKey, PubKey>& leaderKey, const FallbackBlock& fallbackBlock,
bytes& messageToCosign) {
LOG_MARKER();
ConsensusAnnouncement announcement;
// Set the FallbackBlock announcement parameters
NodeFallbackBlockAnnouncement* fallbackblock =
announcement.mutable_fallbackblock();
SerializableToProtobufByteArray(fallbackBlock,
*fallbackblock->mutable_fallbackblock());
if (!fallbackblock->IsInitialized()) {
LOG_GENERAL(WARNING,
"NodeFallbackBlockAnnouncement initialization failed.");
return false;
}
// Set the common consensus announcement parameters
if (!SetConsensusAnnouncementCore(announcement, consensusID, blockNumber,
blockHash, leaderID, leaderKey)) {
LOG_GENERAL(WARNING, "SetConsensusAnnouncementCore failed.");
return false;
}
// Serialize the part of the announcement that should be co-signed during the
// first round of consensus
messageToCosign.clear();
if (!fallbackBlock.GetHeader().Serialize(messageToCosign, 0)) {
LOG_GENERAL(WARNING, "FallbackBlockHeader serialization failed.");
return false;
}
// Serialize the announcement
return SerializeToArray(announcement, dst, offset);
}
bool ConsensusLeader::StartConsensus(
AnnouncementGeneratorFunc announcementGeneratorFunc, bool useGossipProto) {
LOG_MARKER();
// Initial checks
// ==============
if (!CheckState(SEND_ANNOUNCEMENT)) {
return false;
}
// Assemble announcement message body
// ==================================
bytes announcement_message = {m_classByte, m_insByte,
ConsensusMessageType::ANNOUNCE}; // m_classByte and m_insByte here are the values initialized above (NODE, FALLBACKCONSENSUS)
if (!announcementGeneratorFunc(
announcement_message, MessageOffset::BODY + sizeof(uint8_t),
m_consensusID, m_blockNumber, m_blockHash, m_myID,
make_pair(m_myPrivKey, GetCommitteeMember(m_myID).first),
m_messageToCosign)) {
LOG_GENERAL(WARNING, "Failed to generate announcement message.");
return false;
}
LOG_GENERAL(INFO, "Consensus id is " << m_consensusID
<< " Consensus leader id is " << m_myID);
// Update internal state
// =====================
m_state = ANNOUNCE_DONE;
m_commitRedundantCounter = 0;
m_commitFailureCounter = 0;
// Multicast to all nodes in the committee
// =======================================
if (useGossipProto) {
P2PComm::GetInstance().SpreadRumor(announcement_message);
} else {
std::deque<Peer> peer;
for (auto const& i : m_committee) {
peer.push_back(i.second);
}
P2PComm::GetInstance().SendMessage(peer, announcement_message);
}
if (NUM_CONSENSUS_SUBSETS > 1) {
// Start timer for accepting commits
// =================================
auto func = [this]() -> void {
std::unique_lock<std::mutex> cv_lk(m_mutexAnnounceSubsetConsensus);
m_allCommitsReceived = false;
if (cv_scheduleSubsetConsensus.wait_for(
cv_lk, std::chrono::seconds(COMMIT_WINDOW_IN_SECONDS),
[&] { return m_allCommitsReceived; })) {
LOG_GENERAL(INFO, "Received all commits within the Commit window. !!");
} else {
LOG_GENERAL(
INFO,
"Timeout - Commit window closed. Will process commits received !!");
}
if (m_commitCounter < m_numForConsensus) {
LOG_GENERAL(WARNING,
"Insufficient commits obtained after timeout. Required = "
<< m_numForConsensus
<< " Actual = " << m_commitCounter);
m_state = ERROR;
} else {
LOG_GENERAL(
INFO, "Sufficient commits obtained after timeout. Required = "
<< m_numForConsensus << " Actual = " << m_commitCounter);
lock_guard<mutex> g(m_mutex);
GenerateConsensusSubsets();
StartConsensusSubsets();
}
};
DetachedFunction(1, func);
}
return true;
}
Zilliqa DS view change
A DS view change is triggered in two situations:
1. A timeout during consensus on the DS block.
2. A timeout during consensus on the final block.
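In both cases the pattern mirrors the fallback timer above: the node waits a bounded time for the expected block and, on timeout, starts view change consensus. A minimal sketch of that pattern, assuming illustrative names (VIEWCHANGE_TIME, cv_viewChangeBlock, RunConsensusOnViewChange are not taken from the source):
#include <chrono>
#include <condition_variable>
#include <iostream>
#include <mutex>

std::mutex m_mutexViewChange;
std::condition_variable cv_viewChangeBlock;
bool m_blockReceived = false;
const std::chrono::seconds VIEWCHANGE_TIME{300};  // assumed timeout value

void RunConsensusOnViewChange() {
  // In the real node this would elect the next DS leader and run consensus
  // on a view change (VC) block; here it is only a placeholder.
  std::cout << "consensus timed out, starting DS view change" << std::endl;
}

void WaitForBlockOrViewChange() {
  std::unique_lock<std::mutex> lk(m_mutexViewChange);
  if (!cv_viewChangeBlock.wait_for(lk, VIEWCHANGE_TIME,
                                   [] { return m_blockReceived; })) {
    // Timed out during DS block / final block consensus -> trigger view change.
    RunConsensusOnViewChange();
  }
}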