Because data distribution inside a NiFi cluster can only be done with a Remote ProcessGroup, which delivers data to the individual NiFi instances over the site-to-site protocol, I read through the Remote ProcessGroup source code and am writing down my understanding here.
RemoteProcessGroup
org.apache.nifi.groups.RemoteProcessGroup is the interface for a Remote ProcessGroup. Its main methods are:
/**
* Initiates communications between this instance and the remote instance.
*/
void startTransmitting();
/**
* Immediately terminates communications between this instance and the
* remote instance.
*/
void stopTransmitting();
org.apache.nifi.remote.StandardRemoteProcessGroup implements the RemoteProcessGroup interface; its startTransmitting() method starts the StandardRemoteGroupPorts that are bound to it.
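As a rough mental model, starting transmission just means scheduling every remote port bound to the group, which then drives the port's onSchedulingStart() and onTrigger() described below. A conceptual sketch only (not the actual StandardRemoteProcessGroup code; the scheduler call and method shape here are assumptions):
// Conceptual sketch (assumed API, not the real implementation): each bound
// RemoteGroupPort is handed to the framework scheduler, which later calls its
// onSchedulingStart() and then onTrigger() repeatedly.
void startTransmittingSketch(ProcessScheduler scheduler,
                             Collection<RemoteGroupPort> inputPorts,
                             Collection<RemoteGroupPort> outputPorts) {
    for (RemoteGroupPort port : inputPorts) {   // ports that SEND FlowFiles to the remote instance
        scheduler.startPort(port);              // assumed scheduler call
    }
    for (RemoteGroupPort port : outputPorts) {  // ports that RECEIVE FlowFiles from the remote instance
        scheduler.startPort(port);
    }
}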
StandardRemoteGroupPort
org.apache.nifi.remote.StandardRemoteGroupPort is started by the RemoteProcessGroup and is the class that actually carries out the data transfer. Its key member is a SiteToSiteClient, and its main methods are onSchedulingStart() and onTrigger().
public void onSchedulingStart() {
super.onSchedulingStart();
final long penalizationMillis = FormatUtils.getTimeDuration(remoteGroup.getYieldDuration(), TimeUnit.MILLISECONDS);
final SiteToSiteClient client = new SiteToSiteClient.Builder()
.url(remoteGroup.getTargetUri().toString())
.portIdentifier(getIdentifier())
.sslContext(sslContext)
.useCompression(isUseCompression())
.eventReporter(remoteGroup.getEventReporter())
.peerPersistenceFile(getPeerPersistenceFile(getIdentifier(), nifiProperties))
.nodePenalizationPeriod(penalizationMillis, TimeUnit.MILLISECONDS)
.timeout(remoteGroup.getCommunicationsTimeout(TimeUnit.MILLISECONDS), TimeUnit.MILLISECONDS)
.transportProtocol(remoteGroup.getTransportProtocol())
.httpProxy(new HttpProxy(remoteGroup.getProxyHost(), remoteGroup.getProxyPort(), remoteGroup.getProxyUser(), remoteGroup.getProxyPassword()))
.build();
clientRef.set(client);
}
onSchedulingStart() builds the SiteToSiteClient, which is the main client-side interface of the site-to-site protocol.
@Override
public void onTrigger(final ProcessContext context, final ProcessSession session) {
...
final SiteToSiteClient client = getSiteToSiteClient();
final Transaction transaction;
transaction = client.createTransaction(transferDirection);
...
transferFlowFiles(transaction, context, session, firstFlowFile);
...
}
onTrigger() transfers data to the other NiFi instance; it does so through a Transaction.
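The same Transaction flow can also be driven outside the framework with the standalone site-to-site client library. A minimal sender sketch (the URL and port name are placeholders for your own NiFi instance):
import java.nio.charset.StandardCharsets;
import java.util.Collections;
import java.util.Map;

import org.apache.nifi.remote.Transaction;
import org.apache.nifi.remote.TransferDirection;
import org.apache.nifi.remote.client.SiteToSiteClient;

public class SiteToSiteSendExample {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and port name; point these at a reachable NiFi input port.
        try (SiteToSiteClient client = new SiteToSiteClient.Builder()
                .url("http://localhost:8080/nifi")
                .portName("from-client")
                .build()) {

            // Same sequence as onTrigger(): create a transaction, send, confirm, complete.
            Transaction transaction = client.createTransaction(TransferDirection.SEND);
            Map<String, String> attributes = Collections.singletonMap("filename", "example.txt");
            transaction.send("hello nifi".getBytes(StandardCharsets.UTF_8), attributes);
            transaction.confirm();   // verify the transfer with the remote side
            transaction.complete();  // commit the transaction
        }
    }
}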
SiteToSiteClient
SiteToSiteClient is the client interface of the site-to-site protocol. It has two implementations, SocketClient and HttpClient, which correspond to the two transport protocols that can be selected in the Remote ProcessGroup configuration UI. This post focuses on the SocketClient implementation.
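Which implementation the builder returns is decided by the transport protocol, the same value passed through transportProtocol() in onSchedulingStart() above. A small snippet for illustration (the URL and port name are placeholders):
import org.apache.nifi.remote.client.SiteToSiteClient;
import org.apache.nifi.remote.protocol.SiteToSiteTransportProtocol;

// RAW selects the socket-based implementation discussed below; HTTP selects the HTTP one.
SiteToSiteClient rawClient = new SiteToSiteClient.Builder()
        .url("http://localhost:8080/nifi")   // placeholder URL
        .portName("from-client")
        .transportProtocol(SiteToSiteTransportProtocol.RAW)
        .build();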
org.apache.nifi.remote.client.socket.SocketClient creates a SocketClientTransaction in its createTransaction() method.
@Override
public Transaction createTransaction(final TransferDirection direction) throws IOException {
...
final EndpointConnection connectionState = pool.getEndpointConnection(direction, getConfig());
if (connectionState == null) {
return null;
}
final Transaction transaction;
try {
transaction = connectionState.getSocketClientProtocol().startTransaction(
connectionState.getPeer(), connectionState.getCodec(), direction);
} catch (final Throwable t) {
pool.terminate(connectionState);
throw new IOException("Unable to create Transaction to communicate with " + connectionState.getPeer(), t);
}
...
}
EndpointConnectionPool
The pool here is an EndpointConnectionPool: a pool of EndpointConnections that keeps them in a map keyed by peer.
private final ConcurrentMap<PeerDescription, BlockingQueue<EndpointConnection>> connectionQueueMap = new ConcurrentHashMap<>();
It also holds a PeerSelector, which chooses among the remote peers' PeerStatus information.
private final PeerSelector peerSelector;
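Putting the two fields together, handing out a connection works roughly like the following (a simplified sketch written as if inside EndpointConnectionPool, not the actual getEndpointConnection() source; establishConnection() stands in for the real connection setup, and getNextPeerStatus() is shown in the PeerSelector section below):
// Simplified sketch: ask the PeerSelector for the next peer, then reuse an idle
// EndpointConnection from that peer's queue, or open a new one if none is idle.
EndpointConnection getEndpointConnectionSketch(TransferDirection direction) throws IOException {
    final PeerStatus status = peerSelector.getNextPeerStatus(direction);
    if (status == null) {
        return null;  // no usable peer (none known yet, or all are penalized)
    }

    final BlockingQueue<EndpointConnection> queue = connectionQueueMap.computeIfAbsent(
            status.getPeerDescription(), desc -> new LinkedBlockingQueue<>());

    EndpointConnection connection = queue.poll();
    if (connection == null) {
        connection = establishConnection(status);  // hypothetical helper: socket + handshake + codec
    }
    return connection;
}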
EndpointConnection
An EndpointConnection is an actual connection to a remote NiFi instance; it holds the Peer, the data channel to the remote side, and the SocketClientProtocol object used for the handshake.
PeerSelector
It provides getNextPeerStatus(), which returns the PeerStatus to use for the next communication, and refreshPeers(), which refreshes the connection information.
/**
* Return status of a peer that will be used for the next communication.
* The peer with less workload will be selected with higher probability.
* @param direction the amount of workload is calculated based on transaction direction,
* for SEND, a peer with less flow files is preferred,
* for RECEIVE, a peer with more flow files is preferred
* @return a selected peer, if there is no available peer or all peers are penalized, then return null
*/
public PeerStatus getNextPeerStatus(final TransferDirection direction) {
List<PeerStatus> peerList = peerStatuses;
if (isPeerRefreshNeeded(peerList)) {
peerRefreshLock.lock();
try {
// now that we have the lock, check again that we need to refresh (because another thread
// could have been refreshing while we were waiting for the lock).
peerList = peerStatuses;
if (isPeerRefreshNeeded(peerList)) {
try {
peerList = createPeerStatusList(direction);
} catch (final Exception e) {
final String message = String.format("%s Failed to update list of peers due to %s", this, e.toString());
warn(logger, eventReporter, message);
if (logger.isDebugEnabled()) {
logger.warn("", e);
}
}
this.peerStatuses = peerList;
peerRefreshTime = systemTime.currentTimeMillis();
}
} finally {
peerRefreshLock.unlock();
}
}
if (peerList == null || peerList.isEmpty()) {
return null;
}
// ... (remainder omitted: the method then selects and returns a peer from peerList based on workload)
}
createPeerStatusList() here orders the peers based on the transfer direction and the number of FlowFiles each remote NiFi instance's port has to receive or send. When sending, the peer of the NiFi instance with the least data waiting to be received is favored; when receiving, the peer of the instance with the most data waiting to be sent is favored. This is the mechanism that load-balances the cluster.
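The weighting can be pictured with a small illustrative calculation (not the actual createPeerStatusList() code, which builds a weighted list of PeerStatus entries rather than computing a ratio like this):
// Illustrative only: for SEND, a peer's weight grows as its queued FlowFile count
// shrinks; for RECEIVE, it grows with the count. Peers are then chosen with
// probability proportional to their weight.
double weightOf(PeerStatus peer, TransferDirection direction, long totalFlowFileCount) {
    if (totalFlowFileCount == 0) {
        return 1.0;  // no data anywhere: treat all peers equally
    }
    final double share = (double) peer.getFlowFileCount() / totalFlowFileCount;
    return direction == TransferDirection.SEND ? 1.0 - share : share;
}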
private Set<PeerStatus> fetchRemotePeerStatuses() throws IOException {
final Set<PeerDescription> peersToRequestClusterInfoFrom = new HashSet<>();
// Look at all of the peers that we fetched last time.
final Set<PeerStatus> lastFetched = lastFetchedQueryablePeers;
if (lastFetched != null && !lastFetched.isEmpty()) {
lastFetched.stream().map(peer -> peer.getPeerDescription())
.forEach(desc -> peersToRequestClusterInfoFrom.add(desc));
}
// Always add the configured node info to the list of peers to communicate with
peersToRequestClusterInfoFrom.add(peerStatusProvider.getBootstrapPeerDescription());
logger.debug("Fetching remote peer statuses from: {}", peersToRequestClusterInfoFrom);
Exception lastFailure = null;
for (final PeerDescription peerDescription : peersToRequestClusterInfoFrom) {
try {
final Set<PeerStatus> statuses = peerStatusProvider.fetchRemotePeerStatuses(peerDescription);
lastFetchedQueryablePeers = statuses.stream()
.filter(p -> p.isQueryForPeers())
.collect(Collectors.toSet());
return statuses;
} catch (final Exception e) {
logger.warn("Could not communicate with {}:{} to determine which nodes exist in the remote NiFi cluster, due to {}",
peerDescription.getHostname(), peerDescription.getPort(), e.toString());
lastFailure = e;
}
}
final IOException ioe = new IOException("Unable to communicate with remote NiFi cluster in order to determine which nodes exist in the remote cluster");
if (lastFailure != null) {
ioe.addSuppressed(lastFailure);
}
throw ioe;
}
refreshPeers() calls fetchRemotePeerStatuses(), which contacts every peer recorded previously in order to refresh the state of the whole remote cluster. So as long as the NiFi instance behind the configured URL is valid the first time the connection is made, then even if that node later goes down, sending and receiving data through the other nodes should in theory still work.
Moreover, refreshPeers() is scheduled to run continuously when the EndpointConnectionPool is constructed, so the site-to-site protocol keeps refreshing the state of the nodes in the cluster and keeps data delivery reliable.
// in the EndpointConnectionPool constructor: refresh the peer list every 5 seconds
taskExecutor.scheduleWithFixedDelay(new Runnable() {
@Override
public void run() {
peerSelector.refreshPeers();
}
}, 0, 5, TimeUnit.SECONDS);