Zilliqa Code Analysis

Disabling specific warnings

When using third-party libraries or source code, the build often produces warnings. These warnings do not come from our own code, and it is usually not appropriate to go in and modify the third-party sources, but seeing a pile of unrelated warnings on every build is irritating. Worse, warnings produced by our own code can get drowned in the flood of irrelevant ones and end up being overlooked.
So we need a way to silence the warnings produced by third-party code and libraries.
A specific warning can be disabled at compile time with a command-line option. With gcc this is normally a flag of the form -Wno-xxxx, where xxxx is the name of the warning. The drawback is that this silences the warning for all code, whether it comes from the third-party library or from our own sources, so this approach is not a good fit here.
A specific warning can also be disabled in the source itself with #pragma directives, which turn the warning off only for a designated region of code.
Under MSVC the usage looks like this:

#ifdef _MSC_VER
// disable the warnings produced while compiling CImg.h
#pragma  warning( push ) 
#pragma  warning( disable: 4267 4319 )
#endif
#include "CImg.h"
#ifdef _MSC_VER
#pragma  warning(  pop  ) 
#endif

Under GCC the usage looks like this:

#ifdef __GNUC__
// disable the warning produced by the line "using _Base::_Base;"
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Winherited-variadic-ctor"
#endif
.....
namespace cimg_library {
template<typename T>
class CImgWrapper:public CImg<T> {
public:
    using _Base = CImg<T>;
    using _Base::_Base; // inherit the base-class constructors
    ......
};
} /* namespace cimg_library */
#ifdef __GNUC__
#pragma GCC diagnostic pop
#endif

References:
https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html
https://gcc.gnu.org/onlinedocs/gcc/Diagnostic-Pragmas.html#Diagnostic-Pragmas

Zilliqa startup

Run build/tests/Node/pre_run.sh:
python tests/Zilliqa/test_zilliqa_local.py stop
python tests/Zilliqa/test_zilliqa_local.py setup 20
python tests/Zilliqa/test_zilliqa_local.py prestart 10

# clean up persistence storage
rm -rf lookup_local_run/node*

python tests/Zilliqa/test_zilliqa_lookup.py setup 1
  1. During stop:
os.system('fuser -k ' + str(NODE_LISTEN_PORT + x) + '/tcp')

fuser -k kills the processes that are using the given file (here, each node's TCP port).

  2. setup 20
  • Creates 20 directories named node_000x under build/local_run, and copies build/tests/Zilliqa/zilliqa and build/tests/Zilliqa/sendcmd into the build/local_run directory (a rough sketch of steps 1-3 follows this list).
  3. prestart 10
  • Calls build/tests/Zilliqa/genkeypair to generate 20 keypairs.
  • Writes the generated keypairs to build/local_run/keys.txt.
  • Uses the first 10 keypairs to generate build/dsnodes.xml.
  • Uses the first 10 keypairs, plus their IPs and ports, to generate build/config_normal.xml.
  • Uses all 20 keypairs, plus their IPs and ports, to generate build/ds_whitelist.xml.
  • Uses keypair[0] (the public key) of each of the 20 keypairs to generate build/shard_whitelist.xml.
  4. test_zilliqa_lookup.py setup 1 (configures one lookup node)
  • Copies build/tests/Zilliqa/zilliqa to build/lookup_local_run/node_0001/lzilliqa.
  • Generates a keypair for the lookup node.
  • Sets every peer/pubkey under node/look_ups in build/constants_local.xml (this file is a copy of constants_local.xml) to keypair[0] generated above.
  • Writes the generated keypair to build/lookup_local_run/keys.txt.
  • Uses keypair[0] of the generated keypair, plus the IP and port, to generate build/tests/config_lookup.xml.
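
For reference, here is a minimal Python sketch of roughly what the stop/setup/prestart steps above do. It is not the actual test_zilliqa_local.py code: the function names are mine, and the output format of genkeypair ("<public key> <private key>" on stdout) is an assumption.

# Rough sketch only -- not the real test_zilliqa_local.py.
# Assumption: build/tests/Zilliqa/genkeypair prints "<pubkey> <privkey>" on stdout.
import os
import shutil
import subprocess

LOCAL_RUN_DIR = "build/local_run"
NODE_LISTEN_PORT = 5001

def stop(num_nodes=20):
    # Kill whatever process is holding each node's listen port (cf. 'fuser -k' above).
    for x in range(num_nodes):
        os.system("fuser -k " + str(NODE_LISTEN_PORT + x) + "/tcp")

def setup(num_nodes=20):
    # Create node_0001 .. node_0020 and copy the two binaries into build/local_run.
    for i in range(1, num_nodes + 1):
        os.makedirs(os.path.join(LOCAL_RUN_DIR, "node_{:04d}".format(i)), exist_ok=True)
    shutil.copy("build/tests/Zilliqa/zilliqa", LOCAL_RUN_DIR)
    shutil.copy("build/tests/Zilliqa/sendcmd", LOCAL_RUN_DIR)

def prestart(num_ds=10, num_nodes=20):
    # Generate one keypair per node and record them all in keys.txt.
    keypairs = []
    for _ in range(num_nodes):
        out = subprocess.check_output(["build/tests/Zilliqa/genkeypair"])
        keypairs.append(out.decode().strip().split())  # [pubkey, privkey]
    with open(os.path.join(LOCAL_RUN_DIR, "keys.txt"), "w") as f:
        for pub, priv in keypairs:
            f.write(pub + " " + priv + "\n")
    # The first num_ds keypairs feed dsnodes.xml / config_normal.xml; all of them
    # feed ds_whitelist.xml / shard_whitelist.xml (XML generation omitted here).
    return keypairs
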
Run build/tests/Node/test_node_lookup.sh:
  1. test_zilliqa_lookup.py start
  • Copies build/config_lookup.xml to build/lookup_local_run/node_0001/config.xml.
  • Copies build/constants_local.xml to build/lookup_local_run/node_0001/constants.xml.
  • Copies build/dsnodes.xml to build/lookup_local_run/node_0001/dsnodes.xml.
  • Executes:
cd build/lookup_local_run/node_0001; echo keypair[0] keypair[1] > mykey.txt; ./lzilliqa keypair[1] keypair[0] 127.0.0.1 4001 0 0 0 > ./error_log_zilliqa 2>&1 &

This starts the lookup node.

Run build/tests/Node/test_node_simple.sh:
  1. test_zilliqa_local.py start 10 (here 10 is the number of DS nodes; this command runs the first 10 directories under build/local_run as DS nodes and the rest as shard nodes).
  • Copies build/ds_whitelist.xml to build/local_run/node_000x/ds_whitelist.xml, build/shard_whitelist.xml to build/local_run/node_000x/shard_whitelist.xml, build/constants_local.xml to build/local_run/node_000x/constants.xml, and build/dsnodes.xml to build/local_run/node_000x/dsnodes.xml.
  • For the first 10 nodes (the DS nodes), copies build/config_normal.xml to build/local_run/node_000x/config.xml and executes:
cd build/local_run/node_000x; echo keypair[0] keypair[1] > mykey.txt; ./zilliqa keypair[1] keypair[0] 127.0.0.1 500x 1 0 0 > ./error_log_zilliqa 2>&1 &
  • For the shard nodes, it directly executes:
cd build/local_run/node_000x; echo keypair[0] keypair[1] > mykey.txt; ./zilliqa keypair[1] keypair[0] 127.0.0.1 500x 0 0 0 > ./error_log_zilliqa 2>&1 &

The above brings up all 10 DS nodes and 10 shard nodes.

  2. For each DS node it executes
tests/Zilliqa/test_zilliqa_local.py sendcmd $ds 01000000000000000000000000000100007F00001389
  • which in turn runs:
build/tests/Zilliqa/sendcmd 500x cmd 01000000000000000000000000000100007F00001389
  3. For each shard node it executes
tests/Zilliqa/test_zilliqa_local.py startpow $node 10 0000000000000001 05 03 2b740d75891749f94b6a8ec09f086889066608e4418eda656c93443e8310750a e8cc9106f8a28671d91e2de07b57b828934481fadf6956563b963bb8e5c266bf
  • $node is the shard node index, 10 is the number of DS nodes, 0000000000000001 is the blocknum, and 05 and 03 are the difficulty values for the DS nodes and the shard nodes respectively. The last two values are two random numbers.
"{0:0{1}x}".format(NODE_LISTEN_PORT, 8)

In the format string above, the first 0 refers to argument 0 (NODE_LISTEN_PORT), the second 0 is the padding character, {1} refers to argument 1 (the width 8), and x converts the value to hexadecimal (see the sketch after this list).

  • For each shard node it then executes:
build/tests/Zilliqa/sendcmd 500x cmd 0200 0000000000000001 05 03 2b740d75891749f94b6a8ec09f086889066608e4418eda656c93443e8310750a e8cc9106f8a28671d91e2de07b57b828934481fadf6956563b963bb8e5c266bf DS[0].keypair[0] 0000000000000000000000000100007F {0:0{1}x}".format(NODE_LISTEN_PORT + x, 8) ... DS[9].keypair[0] 0000000000000000000000000100007F {0:0{1}x}".format(NODE_LISTEN_PORT + x, 8)

Here DS[0].keypair[0] is keypair[0] (the public key) of DS node 0, and ... stands for the 8 DS nodes in between, each entry having the same format as the final one: DS[9].keypair[0] 0000000000000000000000000100007F {0:0{1}x}".format(NODE_LISTEN_PORT + x, 8). This makes each shard node perform PoW and send its PoW result to all of the DS nodes.
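
The long hex strings in the sendcmd payloads above are just fixed-width hexadecimal fields. The following Python snippet is only an illustration (not taken from the Zilliqa test scripts); the layout it assumes, a 2-byte prefix followed by a 128-bit IP field and a 32-bit port, is inferred from the commands shown above.

# Illustration only: rebuild the hex payloads shown above.
# Assumed layout: <2-byte prefix><128-bit IP field><32-bit port>, written as hex text.
NODE_LISTEN_PORT = 5001

# 127.0.0.1 shows up as 0100007F at the end of the 128-bit IP field
# (it looks like 7F 00 00 01 with the byte order reversed).
LOOPBACK_128 = "0" * 24 + "0100007F"

def ip_port_hex(ip_field, port):
    # 32 hex chars for the IP field + 8 hex chars for the port
    return ip_field + "{0:0{1}x}".format(port, 8)

# DS command sent with `sendcmd 500x cmd ...` (prefix 0100):
ds_cmd = "0100" + ip_port_hex(LOOPBACK_128, NODE_LISTEN_PORT)
print(ds_cmd)  # -> 01000000000000000000000000000100007F00001389

# The startpow message (prefix 0200) appends one entry per DS node:
# the DS node's public key followed by the same IP/port encoding.
def ds_entry(ds_pubkey_hex, x):
    return ds_pubkey_hex + " " + ip_port_hex(LOOPBACK_128, NODE_LISTEN_PORT + x)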

Zilliqa fallback block

Fallback Mechanism
We have introduced a fallback mechanism to improve the robustness of the network and ensure that it always makes progress in spite of unforeseen stalls. When the entire system enters a stall for a significant period of time, the first shard in the system is tasked with reviving the network by assuming the responsibilities of the Directory Service (DS) committee. If, after this, the system still fails to make progress, the second shard proceeds to take over. This fallback mechanism is actualized by the candidate fallback shard performing consensus over a new block type — the Fallback Block — designed for this mechanism. Once consensus is achieved, the block is broadcasted to the rest of the network, and a new round of proof-of-work (PoW) submissions begins thereafter.

The above is the explanation from the Zilliqa blog: to strengthen the robustness of the network, when the DS committee fails, a shard takes over the DS duties to recover the whole system.
A brief summary of the fallback block mechanism: Node::FallbackTimerLaunch() sets up a timer; if the elapsed time exceeds the waiting time assigned to this shard, RunConsensusOnFallback() is called to run consensus, while the nodes of the other shards enter the WAITING_FALLBACKBLOCK state. Depending on whether the node is the leader or a backup, RunConsensusOnFallback() calls RunConsensusOnFallbackWhenLeader() or RunConsensusOnFallbackWhenBackup() respectively, and sets the node's own state to FALLBACK_CONSENSUS.
RunConsensusOnFallbackWhenLeader() calls ComposeFallbackBlock() to assemble the FallbackBlock into Node::m_pendingFallbackBlock, and creates a consensus object in Node::m_consensusObject with m_classByte = NODE and m_insByte = FALLBACKCONSENSUS. When StartConsensus generates the consensus message for the fallback block, these values are written into the first two bytes of the message, and the receiver picks the matching handler based on the ins byte. The leader then sends the fallback block consensus message to every node in its shard (including itself).
RunConsensusOnFallbackWhenBackup() creates a consensus object in Node::m_consensusObject from the current chain information, as preparation for receiving the fallback block consensus messages.
When a shard node (including the leader itself) receives the leader's fallback block consensus message, it dispatches on the second byte of the message (ins_byte) to the corresponding handler, which here is Node::ProcessFallbackConsensus(). Node::ProcessFallbackConsensus() calls m_consensusObject->ProcessMessage(message, offset, from) to carry out the consensus. Note that there are two kinds of message processing: ConsensusLeader::ProcessMessage(message, offset, from) and ConsensusBackup::ProcessMessage(message, offset, from). After consensus completes, the multi-signature results of the two rounds live in m_consensusObject, and the fallback block is held in the m_consensusObject of every shard node that participated as a backup. If consensus succeeds, Node::ProcessFallbackConsensusWhenDone() is called to finish the multi-signature verification (the first-round signature result is also part of what is signed in the second round), and a FallbackBlockWShardingStructure is built from the current sharding structure and the fallback block and stored in m_fallbackBlockDB. The local node then updates m_mediator.m_DSCommittee and m_mediator.m_ds->m_mode, calls m_mediator.m_ds->StartNewDSEpochConsensus(true) to wait for PoW submissions and run the DS block consensus, and generates a fallback block message that is sent to the nodes of the other shards; for that message the ins_byte is FALLBACKBLOCK, and the receiver's handler is Node::ProcessFallbackBlock.
Node::ProcessFallbackBlock performs a series of validations, updates the local DS committee information, restarts the fallback block waiting timer, forwards the fallback block to other shard nodes, and starts PoW.
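
As a quick numeric illustration of the escalation order (the interval value below is made up; the real values come from the constants configuration), shard k starts its own fallback consensus only after waiting (k + 1) * FALLBACK_INTERVAL_WAITING, which matches the blog's description that the first shard tries first, then the second:

# Hypothetical numbers, only to illustrate the per-shard escalation implemented in
# Node::FallbackTimerLaunch(): shard k triggers at (k + 1) * FALLBACK_INTERVAL_WAITING.
FALLBACK_INTERVAL_WAITING = 120  # seconds; made-up value, not the real default

for shard_id in range(3):
    trigger = FALLBACK_INTERVAL_WAITING * (shard_id + 1)
    print("shard", shard_id, "starts fallback consensus after a", trigger, "second stall")
# shard 0 starts fallback consensus after a 120 second stall
# shard 1 starts fallback consensus after a 240 second stall
# shard 2 starts fallback consensus after a 360 second stall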

void Node::FallbackTimerLaunch() {
  if (m_fallbackTimerLaunched) {
    return;    // the fallback timer has already been launched, so exit
  }

  if (!ENABLE_FALLBACK) {
    LOG_GENERAL(INFO, "Fallback is currently disabled");
    return;
  }

  LOG_MARKER();

  if (FALLBACK_INTERVAL_STARTED < FALLBACK_CHECK_INTERVAL ||
      FALLBACK_INTERVAL_WAITING < FALLBACK_CHECK_INTERVAL) {
    LOG_GENERAL(FATAL,
                "The configured fallback checking interval must be "
                "smaller than the timeout value.");
    return;
  }

  m_runFallback = true;
  m_fallbackTimer = 0;
  m_fallbackStarted = false;

  auto func = [this]() -> void {
    while (m_runFallback) {
      this_thread::sleep_for(chrono::seconds(FALLBACK_CHECK_INTERVAL));

      if (m_mediator.m_ds->m_mode != DirectoryService::IDLE) {  // If the DS mode is not IDLE, the DS is working fine, so return directly.
        m_fallbackTimerLaunched = false;
        return;
      }

      lock_guard<mutex> g(m_mutexFallbackTimer);
      /* Possible issue: once a shard's fallback block consensus fails, will it keep taking the first if branch? */
      if (m_fallbackStarted) {
        if (LOOKUP_NODE_MODE) {
          LOG_GENERAL(WARNING,
                      "Node::FallbackTimerLaunch when started is "
                      "true not expected to be called from "
                      "LookUp node.");
          return;
        }
        /*
         * The else branch below resets m_fallbackTimer = 0, so the timer restarts here.
         * If the elapsed time exceeds FALLBACK_INTERVAL_STARTED, the shard leader is
         * replaced and RunConsensusOnFallback() is run again to reach consensus on the
         * fallback block.
         */
        if (m_fallbackTimer >= FALLBACK_INTERVAL_STARTED) {
          UpdateFallbackConsensusLeader();

          auto func = [this]() -> void { RunConsensusOnFallback(); };
          DetachedFunction(1, func);
          /* restart the timer */
          m_fallbackTimer = 0;
        }
      } else {
        bool runConsensus = false;
        
        if (!LOOKUP_NODE_MODE) {
          /* If m_fallbackTimer exceeds the waiting time assigned to this shard, this
             shard takes over the DS duties to recover the whole network */
          /*
           * If the elapsed time exceeds the waiting time of the first shard, the first
           * shard's nodes take the first if branch. Because the first if sets
           * runConsensus = true, the second if is not executed. The first if also sets
           * m_fallbackStarted = true, so the next loop iteration enters the
           * if (m_fallbackStarted) branch above.
           */
          if (m_fallbackTimer >=
              (FALLBACK_INTERVAL_WAITING * (m_myshardId + 1))) {
            /* Start consensus on the fallback block; if this node is the shard leader, it generates the fallback block during the consensus */
            auto func = [this]() -> void { RunConsensusOnFallback(); };
            DetachedFunction(1, func);
            /* Set m_fallbackStarted to mark that fallback handling has started */
            m_fallbackStarted = true;
            runConsensus = true;
            m_fallbackTimer = 0;
            m_justDidFallback = true;
          }
        }

        if (m_fallbackTimer >= FALLBACK_INTERVAL_WAITING &&
            m_state != WAITING_FALLBACKBLOCK &&
            m_state != FALLBACK_CONSENSUS_PREP &&
            m_state != FALLBACK_CONSENSUS && !runConsensus) {
          SetState(WAITING_FALLBACKBLOCK);
          m_justDidFallback = true;
          cv_fallbackBlock.notify_all();
        }
      }

      m_fallbackTimer += FALLBACK_CHECK_INTERVAL;
    }
  };

  DetachedFunction(1, func);
  m_fallbackTimerLaunched = true;
}
void Node::RunConsensusOnFallback() {
  if (LOOKUP_NODE_MODE) {
    LOG_GENERAL(WARNING,
                "DirectoryService::RunConsensusOnFallback not expected "
                "to be called from LookUp node.");
    return;
  }

  LOG_MARKER();

  SetLastKnownGoodState();   // save the state before the fallback into m_fallbackState
  SetState(FALLBACK_CONSENSUS_PREP);   // then set m_state to FALLBACK_CONSENSUS_PREP

  // Upon consensus object creation failure, one should not return from the
  // function, but rather wait for fallback.
  bool ConsensusObjCreation = true;

  if (m_isPrimary) {    // if this node is the shard leader, RunConsensusOnFallbackWhenLeader builds the fallback block via ComposeFallbackBlock
    ConsensusObjCreation = RunConsensusOnFallbackWhenLeader();
    if (!ConsensusObjCreation) {
      LOG_GENERAL(WARNING, "Error after RunConsensusOnFallbackWhenShardLeader");
    }
  } else {
    ConsensusObjCreation = RunConsensusOnFallbackWhenBackup();
    if (!ConsensusObjCreation) {
      LOG_GENERAL(WARNING, "Error after RunConsensusOnFallbackWhenShardBackup");
    }
  }

  if (ConsensusObjCreation) {
    SetState(FALLBACK_CONSENSUS);
    cv_fallbackConsensusObj.notify_all();
  }
}
bool Node::RunConsensusOnFallbackWhenLeader() {
  if (LOOKUP_NODE_MODE) {
    LOG_GENERAL(WARNING,
                "Node::"
                "RunConsensusOnFallbackWhenLeader not expected "
                "to be called from LookUp node.");
    return true;
  }

  LOG_MARKER();

  LOG_EPOCH(INFO, to_string(m_mediator.m_currentEpochNum).c_str(),
            "I am the fallback leader node. Announcing to the rest.");

  {
    lock_guard<mutex> g(m_mutexShardMember);

    if (!ComposeFallbackBlock()) {    // assemble the fallback block
      LOG_EPOCH(WARNING, to_string(m_mediator.m_currentEpochNum).c_str(),
                "Node::RunConsensusOnFallbackWhenLeader failed.");
      return false;
    }

    // Create new consensus object
    m_consensusBlockHash =
        m_mediator.m_txBlockChain.GetLastBlock().GetBlockHash().asBytes();
    // Create the consensus object; the leader creates a ConsensusLeader object
    m_consensusObject.reset(new ConsensusLeader(
        m_mediator.m_consensusID, m_mediator.m_currentEpochNum,
        m_consensusBlockHash, m_consensusMyID, m_mediator.m_selfKey.first,
        *m_myShardMembers, static_cast<unsigned char>(NODE),  // initializes ConsensusLeader::m_classByte
        static_cast<unsigned char>(FALLBACKCONSENSUS),  // initializes ConsensusLeader::m_insByte
        NodeCommitFailureHandlerFunc(), ShardCommitFailureHandlerFunc()));
  }

  if (m_consensusObject == nullptr) {
    LOG_EPOCH(WARNING, to_string(m_mediator.m_currentEpochNum).c_str(),
              "Error: Unable to create consensus leader object");
    return false;
  }

  ConsensusLeader* cl = dynamic_cast<ConsensusLeader*>(m_consensusObject.get());

  vector<unsigned char> m;
  {
    lock_guard<mutex> g(m_mutexPendingFallbackBlock);
    // serialize m_pendingFallbackBlock (the fallback block created by ComposeFallbackBlock) into m
    m_pendingFallbackBlock->Serialize(m, 0);
  }

  std::this_thread::sleep_for(std::chrono::seconds(FALLBACK_EXTRA_TIME));

  auto announcementGeneratorFunc =
      [this](vector<unsigned char>& dst, unsigned int offset,
             const uint32_t consensusID, const uint64_t blockNumber,
             const vector<unsigned char>& blockHash, const uint16_t leaderID,
             const pair<PrivKey, PubKey>& leaderKey,
             vector<unsigned char>& messageToCosign) mutable -> bool {
    lock_guard<mutex> g(m_mutexPendingFallbackBlock);
    return Messenger::SetNodeFallbackBlockAnnouncement(
        dst, offset, consensusID, blockNumber, blockHash, leaderID, leaderKey,
        *m_pendingFallbackBlock, messageToCosign);
  };
  // start consensus on the fallback block
  cl->StartConsensus(announcementGeneratorFunc);

  return true;
}
bool Node::ComposeFallbackBlock() {
  if (LOOKUP_NODE_MODE) {
    LOG_GENERAL(WARNING,
                "Node::ComputeNewFallbackLeader not expected "
                "to be called from LookUp node.");
    return true;
  }

  LOG_MARKER();

  LOG_GENERAL(INFO, "Composing new fallback block with consensus Leader ID at "
                        << m_consensusLeaderID);

  Peer leaderNetworkInfo;
  if (m_myShardMembers->at(m_consensusLeaderID).second == Peer()) {
    leaderNetworkInfo = m_mediator.m_selfPeer;
  } else {
    leaderNetworkInfo = m_myShardMembers->at(m_consensusLeaderID).second;
  }
  LOG_GENERAL(INFO, "m_myShardMembers->at(m_consensusLeaderID).second: "
                        << m_myShardMembers->at(m_consensusLeaderID).second);

  LOG_GENERAL(INFO, "m_mediator.m_selfPeer: " << m_mediator.m_selfPeer);
  LOG_GENERAL(INFO, "LeaderNetworkInfo: " << leaderNetworkInfo);

  CommitteeHash committeeHash;
  if (!Messenger::GetShardHash(m_mediator.m_ds->m_shards.at(m_myshardId),
                               committeeHash)) {
    LOG_EPOCH(WARNING, to_string(m_mediator.m_currentEpochNum).c_str(),
              "Messenger::GetShardHash failed.");
    return false;
  }

  BlockHash prevHash = get<BlockLinkIndex::BLOCKHASH>(
      m_mediator.m_blocklinkchain.GetLatestBlockLink());

  lock_guard<mutex> g(m_mutexPendingFallbackBlock);

  // To-do: Handle exceptions.
  m_pendingFallbackBlock.reset(new FallbackBlock(
      FallbackBlockHeader(
          m_mediator.m_dsBlockChain.GetLastBlock().GetHeader().GetBlockNum() +
              1,
          m_mediator.m_currentEpochNum, m_fallbackState,
          {AccountStore::GetInstance().GetStateRootHash()}, m_consensusLeaderID,
          leaderNetworkInfo, m_myShardMembers->at(m_consensusLeaderID).first,
          m_myshardId, committeeHash, prevHash),
      CoSignatures()));  // store the assembled fallback block in Node::m_pendingFallbackBlock

  return true;
}
bool Messenger::SetNodeFallbackBlockAnnouncement(
    bytes& dst, const unsigned int offset, const uint32_t consensusID,
    const uint64_t blockNumber, const bytes& blockHash, const uint16_t leaderID,
    const pair<PrivKey, PubKey>& leaderKey, const FallbackBlock& fallbackBlock,
    bytes& messageToCosign) {
  LOG_MARKER();

  ConsensusAnnouncement announcement;

  // Set the FallbackBlock announcement parameters

  NodeFallbackBlockAnnouncement* fallbackblock =
      announcement.mutable_fallbackblock();
  SerializableToProtobufByteArray(fallbackBlock,
                                  *fallbackblock->mutable_fallbackblock());

  if (!fallbackblock->IsInitialized()) {
    LOG_GENERAL(WARNING,
                "NodeFallbackBlockAnnouncement initialization failed.");
    return false;
  }

  // Set the common consensus announcement parameters

  if (!SetConsensusAnnouncementCore(announcement, consensusID, blockNumber,
                                    blockHash, leaderID, leaderKey)) {
    LOG_GENERAL(WARNING, "SetConsensusAnnouncementCore failed.");
    return false;
  }

  // Serialize the part of the announcement that should be co-signed during the
  // first round of consensus

  messageToCosign.clear();
  if (!fallbackBlock.GetHeader().Serialize(messageToCosign, 0)) {
    LOG_GENERAL(WARNING, "FallbackBlockHeader serialization failed.");
    return false;
  }

  // Serialize the announcement

  return SerializeToArray(announcement, dst, offset);
}
bool ConsensusLeader::StartConsensus(
    AnnouncementGeneratorFunc announcementGeneratorFunc, bool useGossipProto) {
  LOG_MARKER();

  // Initial checks
  // ==============

  if (!CheckState(SEND_ANNOUNCEMENT)) {
    return false;
  }

  // Assemble announcement message body
  // ==================================
  bytes announcement_message = {m_classByte, m_insByte,
                                ConsensusMessageType::ANNOUNCE};  // these are ConsensusLeader::m_classByte and ConsensusLeader::m_insByte initialized above

  if (!announcementGeneratorFunc(
          announcement_message, MessageOffset::BODY + sizeof(uint8_t),
          m_consensusID, m_blockNumber, m_blockHash, m_myID,
          make_pair(m_myPrivKey, GetCommitteeMember(m_myID).first),
          m_messageToCosign)) {
    LOG_GENERAL(WARNING, "Failed to generate announcement message.");
    return false;
  }

  LOG_GENERAL(INFO, "Consensus id is " << m_consensusID
                                       << " Consensus leader id is " << m_myID);

  // Update internal state
  // =====================

  m_state = ANNOUNCE_DONE;
  m_commitRedundantCounter = 0;
  m_commitFailureCounter = 0;

  // Multicast to all nodes in the committee
  // =======================================

  if (useGossipProto) {
    P2PComm::GetInstance().SpreadRumor(announcement_message);
  } else {
    std::deque<Peer> peer;

    for (auto const& i : m_committee) {
      peer.push_back(i.second);
    }

    P2PComm::GetInstance().SendMessage(peer, announcement_message);
  }

  if (NUM_CONSENSUS_SUBSETS > 1) {
    // Start timer for accepting commits
    // =================================
    auto func = [this]() -> void {
      std::unique_lock<std::mutex> cv_lk(m_mutexAnnounceSubsetConsensus);
      m_allCommitsReceived = false;
      if (cv_scheduleSubsetConsensus.wait_for(
              cv_lk, std::chrono::seconds(COMMIT_WINDOW_IN_SECONDS),
              [&] { return m_allCommitsReceived; })) {
        LOG_GENERAL(INFO, "Received all commits within the Commit window. !!");
      } else {
        LOG_GENERAL(
            INFO,
            "Timeout - Commit window closed. Will process commits received !!");
      }

      if (m_commitCounter < m_numForConsensus) {
        LOG_GENERAL(WARNING,
                    "Insufficient commits obtained after timeout. Required = "
                        << m_numForConsensus
                        << " Actual = " << m_commitCounter);
        m_state = ERROR;
      } else {
        LOG_GENERAL(
            INFO, "Sufficient commits obtained after timeout. Required = "
                      << m_numForConsensus << " Actual = " << m_commitCounter);
        lock_guard<mutex> g(m_mutex);
        GenerateConsensusSubsets();
        StartConsensusSubsets();
      }
    };
    DetachedFunction(1, func);
  }

  return true;
}

Zilliqa DS view change

Two situations trigger a DS view change:
1. A timeout during the DS block consensus.
2. A timeout during the final block consensus.
