1. Why use an MQ? What benefits does it bring, and what drawbacks?
-
Why use an MQ?
MQ stands for message queue. The earliest applications were designed as monoliths; as the business grows and a monolith can no longer keep up, systems get refactored into services. Service-oriented refactoring raises a new problem: how services communicate with one another. The synchronous style is represented by RPC frameworks such as Dubbo, while the asynchronous style is represented by MQ.
-
What benefits does an MQ bring?
An MQ brings three main benefits: asynchrony, decoupling, and peak shaving.
In my view, decoupling and peak shaving are both by-products of asynchrony.
Decoupling: with the MQ acting as an intermediate layer, the upstream service no longer needs to care about the downstream's state, and the downstream no longer needs to care about the upstream's.
Peak shaving: when the business hits a sudden spike and requests flood in, the MQ buffers the backlog first, shaving the peak off the traffic the downstream sees; consumers then pull messages at a rate they can actually handle.
Asynchrony: a request is written into the message queue and the call returns right away; the downstream consumes the messages from the queue later.
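A rough in-process illustration of this pattern (the names and the BlockingQueue stand-in below are made up for the sketch, not QMQ's API): the request handler merely enqueues and returns, while a separate consumer drains the queue at its own pace.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class AsyncMqSketch {
    // In-process stand-in for the MQ; a real broker would sit between two services.
    private static final BlockingQueue<String> QUEUE = new LinkedBlockingQueue<>(10_000);

    // Upstream: write the message and return immediately (asynchrony + decoupling).
    static boolean handleRequest(String orderId) {
        return QUEUE.offer("order-created:" + orderId); // the bounded queue also absorbs bursts (peak shaving)
    }

    public static void main(String[] args) throws InterruptedException {
        // Downstream: pulls at its own pace, independent of the upstream request rate.
        Thread consumer = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    System.out.println("consumed " + QUEUE.take());
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        consumer.setDaemon(true);
        consumer.start();

        for (int i = 0; i < 5; i++) {
            handleRequest("A" + i); // returns immediately, without waiting for the downstream
        }
        Thread.sleep(200);
    }
}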
-
What drawbacks does an MQ bring?
Adding an MQ as an intermediate layer brings all the benefits above, but it also has downsides.
Lower system availability. Every component you add is another component whose availability must be guaranteed; you have to consider whether the service, or even the whole call chain, becomes unavailable when the MQ is down.
Higher system complexity. Keeping message handling idempotent, making sure messages are not lost, and preserving message ordering are all problems the MQ introduces.
Consistency problems. How do you make sure a message really gets consumed?
2. What are the common MQs on the market, and how do they compare in performance?
Benchmark numbers for RabbitMQ, Kafka, RocketMQ and others are easy to find online.
The QMQ developers have also said that QMQ's performance is in the same order of magnitude as RocketMQ's.
3. What are QMQ's distinguishing features?
4. What is QMQ's storage model?
Kafka and RocketMQ both use a partition-based storage model: each subject (topic) is split into one or more partitions, and consumers are bound one-to-one to partitions when consuming.
As illustrated below:
Under this design, if there are more consumers than partitions, some consumers sit idle.
If there are more partitions than consumers, some consumers become overloaded.
These are the problems of doing load balancing on top of partitions. Because of the static binding, when consumption cannot keep up with production, simply adding consumers is not enough; you also have to add partitions. In Kafka especially, a partition is a fairly heavyweight resource, and adding many partitions also forces you to consider the cluster's capacity. And once the peak has passed, scaling consumers back down is awkward, because partitions can only be added, never removed.
This design brings yet another problem: once messages have piled up, adding partitions does not help, because the backlog already sits in the existing partitions; new partitions only receive newly assigned messages.
The figure above shows QMQ's storage model. The number above each box is that entry's offset within its own log, and the number inside the box is the entry's content. For example, the 3, 6, 9 above the message log are the offsets of those messages within the message log. The 3, 6, 9, 20 inside the consume log boxes refer back to message log offsets and indicate that those messages all belong to topic1, while the 1, 2, 3, 4 above the consume log boxes are their logical offsets within the consume log. The 1, 2, 3, 4 inside the pull log boxes below refer to consume log logical offsets, and the numbers outside the pull log boxes are the pull log's own logical offsets.
The message log is the primary store for all messages; messages of every topic are appended to this one log.
The consume log stores indexes into the message log.
The pull log: each consumer produces a pull log as it pulls messages; the pull log records the consume log sequence of every message that was pulled.
A consumer can then use its pull log sequence to track its consumption progress, which removes the coupling between consumers and partitions; both sides can scale independently.
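A minimal sketch of this three-level indirection (the class and field names below are invented for illustration and are not QMQ's code): a consumer's pull log entry points to a consume log sequence, which in turn points to an offset in the shared message log.

import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class QmqStorageSketch {
    // message log: one shared append-only store for all subjects; the list index plays the role of the offset.
    static final List<String> MESSAGE_LOG = new ArrayList<>();
    // consume log: per subject, a list of message log offsets.
    static final Map<String, List<Integer>> CONSUME_LOG = new HashMap<>();
    // pull log: per consumer, the consume log sequences it has pulled.
    static final Map<String, List<Integer>> PULL_LOG = new HashMap<>();

    static void produce(String subject, String payload) {
        MESSAGE_LOG.add(payload);
        CONSUME_LOG.computeIfAbsent(subject, s -> new ArrayList<>()).add(MESSAGE_LOG.size() - 1);
    }

    // A pull appends consume log sequences to the consumer's own pull log and resolves the real messages.
    static List<String> pull(String consumer, String subject, int fromConsumeSeq, int count) {
        List<Integer> consumeLog = CONSUME_LOG.getOrDefault(subject, Collections.emptyList());
        List<Integer> pullLog = PULL_LOG.computeIfAbsent(consumer, c -> new ArrayList<>());
        List<String> messages = new ArrayList<>();
        for (int seq = fromConsumeSeq; seq < Math.min(fromConsumeSeq + count, consumeLog.size()); seq++) {
            pullLog.add(seq);                                    // pull log records the consume log sequence
            messages.add(MESSAGE_LOG.get(consumeLog.get(seq)));  // consume log maps it to a message log offset
        }
        return messages;
    }

    public static void main(String[] args) {
        produce("topic1", "m1");
        produce("topic2", "x1");
        produce("topic1", "m2");
        // Consumption progress is just the position in the consumer's own pull log,
        // so consumers and storage can scale independently of each other.
        System.out.println(pull("consumerA", "topic1", 0, 10)); // [m1, m2]
        System.out.println(PULL_LOG.get("consumerA"));          // consume log sequences [0, 1]
    }
}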
5. How does QMQ achieve high availability?
Sharding + replication.
Sharding
Because QMQ is not partition-based, a topic's availability can be improved simply by adding more machines. Messages are spread across machines according to a load-balancing strategy, and once a machine goes offline, producers stop sending messages to that server.
Replication
QMQ uses master-slave replication to cope with single-machine failures. Servers are divided into multiple groups; each group contains a master and its slave(s). Both sending and consuming go to the master, while the slave exists only to provide availability.
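A rough sketch of the routing idea under these assumptions (the BrokerGroup type and the random pick below are invented for illustration, not QMQ's actual strategy): producers spread messages over the available broker groups and simply skip a group whose master is offline.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

public class BrokerRoutingSketch {
    static class BrokerGroup {
        final String name;
        final boolean masterOnline;
        BrokerGroup(String name, boolean masterOnline) {
            this.name = name;
            this.masterOnline = masterOnline;
        }
    }

    // Pick a random group among those whose master is up; offline groups stop receiving traffic.
    static BrokerGroup route(List<BrokerGroup> groups) {
        List<BrokerGroup> alive = new ArrayList<>();
        for (BrokerGroup g : groups) {
            if (g.masterOnline) {
                alive.add(g);
            }
        }
        if (alive.isEmpty()) {
            throw new IllegalStateException("no broker group available");
        }
        return alive.get(ThreadLocalRandom.current().nextInt(alive.size()));
    }

    public static void main(String[] args) {
        List<BrokerGroup> groups = Arrays.asList(
                new BrokerGroup("group-1", true),
                new BrokerGroup("group-2", false), // master down: producers skip it
                new BrokerGroup("group-3", true));
        System.out.println("send to " + route(groups).name);
    }
}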
6. How does QMQ guarantee idempotency?
Exactly-once consumption
Delivery semantics are usually classified as at-most-once, at-least-once and exactly-once. The last one is the model we want most, and also the hardest to implement. Given the complexity of the network and of the dependencies an application has, true exactly-once is basically infeasible, but we can achieve an effective exactly-once through idempotent processing.
When does duplicate consumption happen?
When sending a message, network jitter can make the send time out even though the server actually received the message; only the server's ACK back to the producer was late. To avoid losing messages the producer usually retries, so the same message gets sent more than once.
When a consumer receives a message and its business logic depends on something external, such as an HTTP call, the call can succeed in reality yet still look like a timeout; in that case the message gets redelivered as well.
-
After a consumer processes a message successfully, the ACK it returns to the server can be lost due to network problems, which also leads to the message being consumed again.
How QMQ guarantees idempotency:
A DB-based idempotent handler: a database transaction makes the business operation and the deduplication check atomic.
A Redis-based idempotent handler.
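A hedged sketch of the DB-based idea (the dedup_keys table, the orders update and the JDBC wiring below are assumptions for illustration, not QMQ's actual implementation): inserting the message key into a table with a unique constraint inside the same transaction as the business update makes "dedup + business" atomic, so a redelivered message either trips the unique constraint and is skipped, or everything rolls back together.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class DbIdempotentHandlerSketch {
    private final DataSource dataSource;

    public DbIdempotentHandlerSketch(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    /** Returns true if the message was processed, false if it was a duplicate (or failed) and nothing was applied. */
    public boolean handle(String messageId, String orderId) throws SQLException {
        try (Connection conn = dataSource.getConnection()) {
            conn.setAutoCommit(false);
            try {
                // 1. Dedup: relies on a UNIQUE constraint on dedup_keys.message_id (assumed schema).
                try (PreparedStatement dedup = conn.prepareStatement(
                        "INSERT INTO dedup_keys (message_id) VALUES (?)")) {
                    dedup.setString(1, messageId);
                    dedup.executeUpdate(); // duplicate key -> SQLException -> rollback below
                }
                // 2. Business update in the same transaction.
                try (PreparedStatement biz = conn.prepareStatement(
                        "UPDATE orders SET status = 'PAID' WHERE order_id = ?")) {
                    biz.setString(1, orderId);
                    biz.executeUpdate();
                }
                conn.commit(); // dedup record and business change become visible together
                return true;
            } catch (SQLException duplicateOrFailure) {
                conn.rollback(); // duplicate message or business failure: nothing is applied
                return false;
            }
        }
    }
}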
8. Delayed messages and the hashed wheel timer algorithm.
The hash wheel timer used in QMQ is based on Netty's HashedWheelTimer.
As in the figure above, suppose the wheel has 8 slots (the wheel is backed by an array) and advances one slot per second; each slot points to a linked list holding the tasks (TimeOutTask) waiting to run.
Suppose the pointer is currently at slot 2 and we add a task that should run 3 s later: 2 + 3 = 5, so we append a node pointing to the task to the list at slot 5 and mark it with round = 0.
Suppose the pointer is currently at slot 2 and we add a task that should run 10 s later: (2 + 10) % 8 = 4, so we append a node at slot 4 and mark it with round = 1; the task runs the second time the wheel passes slot 4.
When the wheel reaches a slot it only executes the tasks whose round is 0, and decrements round by 1 on the slot's remaining tasks.
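The same arithmetic written out as a tiny sketch (an 8-slot wheel ticking once per second; this is only a schematic of the slot/round calculation, not Netty's code, which is listed right after).

public class WheelMathSketch {
    static final int WHEEL_SIZE = 8;        // a power of two, so (x & (size - 1)) == x % size
    static final int MASK = WHEEL_SIZE - 1;

    // Given the current tick and a delay in ticks, compute the target slot and how many full rounds to wait.
    static void schedule(long currentTick, long delayTicks) {
        long targetTick = currentTick + delayTicks;
        int slot = (int) (targetTick & MASK); // which bucket the task lands in
        long round = delayTicks / WHEEL_SIZE; // how many full turns must pass before it fires
        System.out.println("delay=" + delayTicks + "s -> slot=" + slot + ", round=" + round);
    }

    public static void main(String[] args) {
        schedule(2, 3);  // slot 5, round 0: fires the first time the wheel reaches slot 5
        schedule(2, 10); // slot 4, round 1: skipped once, fires on the second pass over slot 4
    }
}

The annotated Netty source follows.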
public class HashedWheelTimer implements Timer {
static final InternalLogger logger = InternalLoggerFactory.getInstance(HashedWheelTimer.class);
private static final AtomicInteger INSTANCE_COUNTER = new AtomicInteger();
private static final AtomicBoolean WARNED_TOO_MANY_INSTANCES = new AtomicBoolean();
private static final int INSTANCE_COUNT_LIMIT = 64;
private static final ResourceLeakDetector<HashedWheelTimer> leakDetector =
ResourceLeakDetectorFactory.instance().newResourceLeakDetector(HashedWheelTimer.class, 1);
private static final AtomicIntegerFieldUpdater<HashedWheelTimer> WORKER_STATE_UPDATER =
AtomicIntegerFieldUpdater.newUpdater(HashedWheelTimer.class, "workerState");
private final ResourceLeakTracker<HashedWheelTimer> leak;
private final Worker worker = new Worker();
private final Thread workerThread;
public static final int WORKER_STATE_INIT = 0;
public static final int WORKER_STATE_STARTED = 1;
public static final int WORKER_STATE_SHUTDOWN = 2;
@SuppressWarnings({"unused", "FieldMayBeFinal", "RedundantFieldInitialization"})
private volatile int workerState = WORKER_STATE_INIT; // 0 - init, 1 - started, 2 - shut down
private final long tickDuration;
private final HashedWheelBucket[] wheel;
private final int mask;
private final CountDownLatch startTimeInitialized = new CountDownLatch(1);
private final Queue<HashedWheelTimeout> timeouts = PlatformDependent.newMpscQueue();
private final Queue<HashedWheelTimeout> cancelledTimeouts = PlatformDependent.newMpscQueue();
private final AtomicLong pendingTimeouts = new AtomicLong(0);
private final long maxPendingTimeouts;
private volatile long startTime;
public HashedWheelTimer() {
this(Executors.defaultThreadFactory());
}
public HashedWheelTimer(long tickDuration, TimeUnit unit) {
this(Executors.defaultThreadFactory(), tickDuration, unit);
}
public HashedWheelTimer(long tickDuration, TimeUnit unit, int ticksPerWheel) {
this(Executors.defaultThreadFactory(), tickDuration, unit, ticksPerWheel);
}
public HashedWheelTimer(ThreadFactory threadFactory) {
this(threadFactory, 100, TimeUnit.MILLISECONDS);
}
public HashedWheelTimer(ThreadFactory threadFactory, long tickDuration, TimeUnit unit) {
this(threadFactory, tickDuration, unit, 512);
}
public HashedWheelTimer(ThreadFactory threadFactory, long tickDuration, TimeUnit unit, int ticksPerWheel) {
this(threadFactory, tickDuration, unit, ticksPerWheel, true);
}
public HashedWheelTimer(ThreadFactory threadFactory, long tickDuration, TimeUnit unit, int ticksPerWheel,
boolean leakDetection) {
this(threadFactory, tickDuration, unit, ticksPerWheel, leakDetection, -1);
}
// The final (master) constructor.
// The default tickDuration is 100ms, i.e. it takes 100ms to advance one slot.
// The default ticksPerWheel is 512, i.e. the default wheel size is 512.
public HashedWheelTimer(ThreadFactory threadFactory, long tickDuration, TimeUnit unit, int ticksPerWheel,
boolean leakDetection, long maxPendingTimeouts) {
if (threadFactory == null) {
throw new NullPointerException("threadFactory");
}
if (unit == null) {
throw new NullPointerException("unit");
}
if (tickDuration <= 0) {
throw new IllegalArgumentException("tickDuration must be greater than 0: " + tickDuration);
}
if (ticksPerWheel <= 0) {
throw new IllegalArgumentException("ticksPerWheel must be greater than 0: " + ticksPerWheel);
}
// Normalize ticksPerWheel to power of two and initialize the wheel.
wheel = createWheel(ticksPerWheel);
mask = wheel.length - 1;
// Convert tickDuration to nanos.
this.tickDuration = unit.toNanos(tickDuration);
// Prevent overflow.
if (this.tickDuration >= Long.MAX_VALUE / wheel.length) {
throw new IllegalArgumentException(
String.format("tickDuration: %d (expected: 0 < tickDuration in nanos < %d", tickDuration,
Long.MAX_VALUE / wheel.length));
}
// Create the worker thread; it is only started later, on demand, in start().
workerThread = threadFactory.newThread(worker);
leak = leakDetection || !workerThread.isDaemon() ? leakDetector.track(this) : null;
this.maxPendingTimeouts = maxPendingTimeouts;
// Overload protection: once more than 64 timer instances exist at the same time, an error is logged (once).
if (INSTANCE_COUNTER.incrementAndGet() > INSTANCE_COUNT_LIMIT && WARNED_TOO_MANY_INSTANCES
.compareAndSet(false, true)) {
reportTooManyInstances();
}
}
@Override
protected void finalize() throws Throwable {
try {
super.finalize();
} finally {
// This object is going to be GCed and it is assumed the ship has sailed to do a proper shutdown. If
// we have not yet shutdown then we want to make sure we decrement the active instance count.
// Before this object is finally GC'd, decrement the active instance count (see the 64-instance warning above).
if (WORKER_STATE_UPDATER.getAndSet(this, WORKER_STATE_SHUTDOWN) != WORKER_STATE_SHUTDOWN) {
INSTANCE_COUNTER.decrementAndGet();
}
}
}
// Called from the constructor to build the whole wheel.
private static HashedWheelBucket[] createWheel(int ticksPerWheel) {
// corner case
if (ticksPerWheel <= 0) {
throw new IllegalArgumentException("ticksPerWheel must be greater than 0: " + ticksPerWheel);
}
if (ticksPerWheel > 1073741824) {
throw new IllegalArgumentException("ticksPerWheel may not be greater than 2^30: " + ticksPerWheel);
}
// Round the wheel size up to the next power of two if it is not one already.
ticksPerWheel = normalizeTicksPerWheel(ticksPerWheel);
// The HashedWheelBucket array is the wheel itself.
HashedWheelBucket[] wheel = new HashedWheelBucket[ticksPerWheel];
for (int i = 0; i < wheel.length; i++) {
// Fill the array with buckets; each bucket holds two HashedWheelTimeout pointers, head and tail.
wheel[i] = new HashedWheelBucket();
}
return wheel;
}
private static int normalizeTicksPerWheel(int ticksPerWheel) {
int normalizedTicksPerWheel = 1;
while (normalizedTicksPerWheel < ticksPerWheel) {
normalizedTicksPerWheel <<= 1;
}
return normalizedTicksPerWheel;
}
/**
* Starts the background thread explicitly. The background thread will
* start automatically on demand even if you did not call this method.
*
* @throws IllegalStateException if this timer has been
* {@linkplain #stop() stopped} already
*/
// As the Javadoc says: starts the background thread explicitly; if it is not started explicitly it will start on demand.
public void start() {
switch (WORKER_STATE_UPDATER.get(this)) {
case WORKER_STATE_INIT:
if (WORKER_STATE_UPDATER.compareAndSet(this, WORKER_STATE_INIT, WORKER_STATE_STARTED)) {
workerThread.start();
}
break;
case WORKER_STATE_STARTED:
break;
case WORKER_STATE_SHUTDOWN:
throw new IllegalStateException("cannot be started once stopped");
default:
throw new Error("Invalid WorkerState");
}
// Wait until the startTime is initialized by the worker.
// The worker thread sets startTime when it starts running (using 1 if nanoTime() happens to be 0); waiting here guarantees the worker is up.
while (startTime == 0) {
try {
startTimeInitialized.await();
} catch (InterruptedException ignore) {
// Ignore - it will be ready very soon.
}
}
}
@Override
public Set<Timeout> stop() {
if (Thread.currentThread() == workerThread) {
throw new IllegalStateException(
HashedWheelTimer.class.getSimpleName() + ".stop() cannot be called from " + TimerTask.class
.getSimpleName());
}
// CAS the current state from STARTED to SHUTDOWN.
if (!WORKER_STATE_UPDATER.compareAndSet(this, WORKER_STATE_STARTED, WORKER_STATE_SHUTDOWN)) {
// workerState can be 0 or 2 at this moment - let it always be 2.
// Force the state to SHUTDOWN; if it was not already shut down, decrement the instance count.
if (WORKER_STATE_UPDATER.getAndSet(this, WORKER_STATE_SHUTDOWN) != WORKER_STATE_SHUTDOWN) {
INSTANCE_COUNTER.decrementAndGet();
if (leak != null) {
boolean closed = leak.close(this);
assert closed;
}
}
return Collections.emptySet();
}
try {
boolean interrupted = false;
// While the worker thread is still alive, keep interrupting it.
while (workerThread.isAlive()) {
workerThread.interrupt();
try {
// Wait up to 100ms for the worker thread to die.
workerThread.join(100);
} catch (InterruptedException ignored) {
interrupted = true;
}
}
if (interrupted) {
Thread.currentThread().interrupt();
}
} finally {
// Decrement the instance count.
INSTANCE_COUNTER.decrementAndGet();
if (leak != null) {
boolean closed = leak.close(this);
assert closed;
}
}
return worker.unprocessedTimeouts();
}
@Override
public Timeout newTimeout(TimerTask task, long delay, TimeUnit unit) {
if (task == null) {
throw new NullPointerException("task");
}
if (unit == null) {
throw new NullPointerException("unit");
}
if (shouldLimitTimeouts()) {
long pendingTimeoutsCount = pendingTimeouts.incrementAndGet();
if (pendingTimeoutsCount > maxPendingTimeouts) {
pendingTimeouts.decrementAndGet();
throw new RejectedExecutionException("Number of pending timeouts (" + pendingTimeoutsCount
+ ") is greater than or equal to maximum allowed pending " + "timeouts (" + maxPendingTimeouts
+ ")");
}
}
// Calling start() here starts the timer automatically; it can also be started manually.
start();
// Add the timeout to the timeout queue which will be processed on the next tick.
// During processing all the queued HashedWheelTimeouts will be added to the correct HashedWheelBucket.
// Compute the deadline relative to startTime.
long deadline = System.nanoTime() + unit.toNanos(delay) - startTime;
// Create the HashedWheelTimeout object.
HashedWheelTimeout timeout = new HashedWheelTimeout(this, task, deadline);
// Add it to the queue; the worker moves it into a bucket on the next tick.
timeouts.add(timeout);
return timeout;
}
private boolean shouldLimitTimeouts() {
return maxPendingTimeouts > 0;
}
// Too many timer instances.
private static void reportTooManyInstances() {
String resourceType = simpleClassName(HashedWheelTimer.class);
logger.error("You are creating too many " + resourceType + " instances. " + resourceType
+ " is a shared resource that must be reused across the JVM,"
+ "so that only a few instances are created.");
}
// The worker that drives the wheel.
private final class Worker implements Runnable {
// The set of timeouts left unprocessed when the timer stops.
private final Set<Timeout> unprocessedTimeouts = new HashSet<Timeout>();
// The number of ticks the wheel has advanced so far.
private long tick;
@Override
public void run() {
// Initialize the start time.
startTime = System.nanoTime();
if (startTime == 0) {
// 0 is used as the "not initialized" sentinel, so substitute 1.
startTime = 1;
}
// Notify the other threads waiting for the initialization at start().
startTimeInitialized.countDown();
do {
// Sleep until the next tick and get the elapsed time, used as the deadline.
final long deadline = waitForNextTick();
if (deadline > 0) {
// Compute the index of the current bucket.
int idx = (int) (tick & mask);
// Process cancelled tasks first.
processCancelledTasks();
// Fetch the current bucket.
HashedWheelBucket bucket = wheel[idx];
transferTimeoutsToBuckets();
bucket.expireTimeouts(deadline);
tick++;
}
} while (WORKER_STATE_UPDATER.get(HashedWheelTimer.this) == WORKER_STATE_STARTED);
// Fill the unprocessedTimeouts so we can return them from stop() method.
for (HashedWheelBucket bucket : wheel) {
bucket.clearTimeouts(unprocessedTimeouts);
}
for (; ; ) {
HashedWheelTimeout timeout = timeouts.poll();
if (timeout == null) {
break;
}
if (!timeout.isCancelled()) {
unprocessedTimeouts.add(timeout);
}
}
processCancelledTasks();
}
private void transferTimeoutsToBuckets() {
// Cap the work per tick: transfer at most 100,000 timeouts in one go.
for (int i = 0; i < 100000; i++) {
// Take a timeout from the queue.
HashedWheelTimeout timeout = timeouts.poll();
if (timeout == null) {
// all processed
break;
}
if (timeout.state() == HashedWheelTimeout.ST_CANCELLED) {
// Was cancelled in the meantime.
continue;
}
long calculated = timeout.deadline / tickDuration;
// Compute the remaining rounds.
timeout.remainingRounds = (calculated - tick) / wheel.length;
// Compute the target tick.
final long ticks = Math.max(calculated, tick); // Ensure we don't schedule for past.
// Compute the slot index in the wheel.
int stopIndex = (int) (ticks & mask);
// Append it to the bucket's doubly linked list.
HashedWheelBucket bucket = wheel[stopIndex];
bucket.addTimeout(timeout);
}
}
private void processCancelledTasks() {
for (; ; ) {
HashedWheelTimeout timeout = cancelledTimeouts.poll();
if (timeout == null) {
// all processed
break;
}
try {
timeout.remove();
} catch (Throwable t) {
if (logger.isWarnEnabled()) {
logger.warn("An exception was thrown while process a cancellation task", t);
}
}
}
}
/**
* calculate goal nanoTime from startTime and current tick number,
* then wait until that goal has been reached.
*
* @return Long.MIN_VALUE if received a shutdown request,
* current time otherwise (with Long.MIN_VALUE changed by +1)
*/
private long waitForNextTick() {
long deadline = tickDuration * (tick + 1);
for (; ; ) {
final long currentTime = System.nanoTime() - startTime;
long sleepTimeMs = (deadline - currentTime + 999999) / 1000000;
// The tick's deadline has been reached.
if (sleepTimeMs <= 0) {
if (currentTime == Long.MIN_VALUE) {
return -Long.MAX_VALUE;
} else {
return currentTime;
}
}
// Check if we run on windows, as if thats the case we will need
// to round the sleepTime as workaround for a bug that only affect
// the JVM if it runs on windows.
//
// See https://github.com/netty/netty/issues/356
// On Windows, round the sleep time down to a multiple of 10ms.
if (PlatformDependent.isWindows()) {
sleepTimeMs = sleepTimeMs / 10 * 10;
}
try {
// Sleep for that long.
Thread.sleep(sleepTimeMs);
} catch (InterruptedException ignored) {
if (WORKER_STATE_UPDATER.get(HashedWheelTimer.this) == WORKER_STATE_SHUTDOWN) {
return Long.MIN_VALUE;
}
}
}
}
public Set<Timeout> unprocessedTimeouts() {
return Collections.unmodifiableSet(unprocessedTimeouts);
}
}
private static final class HashedWheelTimeout implements Timeout {
private static final int ST_INIT = 0;
private static final int ST_CANCELLED = 1;
private static final int ST_EXPIRED = 2;
private static final AtomicIntegerFieldUpdater<HashedWheelTimeout> STATE_UPDATER =
AtomicIntegerFieldUpdater.newUpdater(HashedWheelTimeout.class, "state");
private final HashedWheelTimer timer;
private final TimerTask task;
private final long deadline;
@SuppressWarnings({"unused", "FieldMayBeFinal", "RedundantFieldInitialization"})
private volatile int state = ST_INIT;
// remainingRounds will be calculated and set by Worker.transferTimeoutsToBuckets() before the
// HashedWheelTimeout will be added to the correct HashedWheelBucket.
long remainingRounds;
// This will be used to chain timeouts in HashedWheelTimerBucket via a double-linked-list.
// As only the workerThread will act on it there is no need for synchronization / volatile.
HashedWheelTimeout next;
HashedWheelTimeout prev;
// The bucket to which the timeout was added
HashedWheelBucket bucket;
HashedWheelTimeout(HashedWheelTimer timer, TimerTask task, long deadline) {
this.timer = timer;
this.task = task;
this.deadline = deadline;
}
@Override
public Timer timer() {
return timer;
}
@Override
public TimerTask task() {
return task;
}
@Override
public boolean cancel() {
// only update the state it will be removed from HashedWheelBucket on next tick.
if (!compareAndSetState(ST_INIT, ST_CANCELLED)) {
return false;
}
// If a task should be canceled we put this to another queue which will be processed on each tick.
// So this means that we will have a GC latency of max. 1 tick duration which is good enough. This way
// we can make again use of our MpscLinkedQueue and so minimize the locking / overhead as much as possible.
timer.cancelledTimeouts.add(this);
return true;
}
void remove() {
HashedWheelBucket bucket = this.bucket;
if (bucket != null) {
bucket.remove(this);
} else if (timer.shouldLimitTimeouts()) {
timer.pendingTimeouts.decrementAndGet();
}
}
public boolean compareAndSetState(int expected, int state) {
return STATE_UPDATER.compareAndSet(this, expected, state);
}
public int state() {
return state;
}
@Override
public boolean isCancelled() {
return state() == ST_CANCELLED;
}
@Override
public boolean isExpired() {
return state() == ST_EXPIRED;
}
public void expire() {
if (!compareAndSetState(ST_INIT, ST_EXPIRED)) {
return;
}
try {
task.run(this);
} catch (Throwable t) {
if (logger.isWarnEnabled()) {
logger.warn("An exception was thrown by " + TimerTask.class.getSimpleName() + '.', t);
}
}
}
@Override
public String toString() {
final long currentTime = System.nanoTime();
long remaining = deadline - currentTime + timer.startTime;
StringBuilder buf = new StringBuilder(192).append(simpleClassName(this)).append('(').append("deadline: ");
if (remaining > 0) {
buf.append(remaining).append(" ns later");
} else if (remaining < 0) {
buf.append(-remaining).append(" ns ago");
} else {
buf.append("now");
}
if (isCancelled()) {
buf.append(", cancelled");
}
return buf.append(", task: ").append(task()).append(')').toString();
}
}
/**
* Bucket that stores HashedWheelTimeouts. These are stored in a linked-list like datastructure to allow easy
* removal of HashedWheelTimeouts in the middle. Also the HashedWheelTimeout act as nodes themself and so no
* extra object creation is needed.
*/
private static final class HashedWheelBucket {
// Used for the linked-list datastructure
private HashedWheelTimeout head;
private HashedWheelTimeout tail;
/**
* Add {@link HashedWheelTimeout} to this bucket.
*/
public void addTimeout(HashedWheelTimeout timeout) {
assert timeout.bucket == null;
timeout.bucket = this;
if (head == null) {
head = tail = timeout;
} else {
tail.next = timeout;
timeout.prev = tail;
tail = timeout;
}
}
/**
* Expire all {@link HashedWheelTimeout}s for the given {@code deadline}.
*/
public void expireTimeouts(long deadline) {
HashedWheelTimeout timeout = head;
// process all timeouts
while (timeout != null) {
HashedWheelTimeout next = timeout.next;
if (timeout.remainingRounds <= 0) {
next = remove(timeout);
if (timeout.deadline <= deadline) {
timeout.expire();
} else {
// The timeout was placed into a wrong slot. This should never happen.
throw new IllegalStateException(
String.format("timeout.deadline (%d) > deadline (%d)", timeout.deadline, deadline));
}
// Remove the timeout if it was cancelled.
} else if (timeout.isCancelled()) {
next = remove(timeout);
} else {
// remainingRounds > 0 means more full turns are still needed, so decrement it.
timeout.remainingRounds--;
}
timeout = next;
}
}
public HashedWheelTimeout remove(HashedWheelTimeout timeout) {
HashedWheelTimeout next = timeout.next;
// remove timeout that was either processed or cancelled by updating the linked-list
if (timeout.prev != null) {
timeout.prev.next = next;
}
if (timeout.next != null) {
timeout.next.prev = timeout.prev;
}
if (timeout == head) {
// if timeout is also the tail we need to adjust the entry too
if (timeout == tail) {
tail = null;
head = null;
} else {
head = next;
}
} else if (timeout == tail) {
// if the timeout is the tail modify the tail to be the prev node.
tail = timeout.prev;
}
// null out prev, next and bucket to allow for GC.
timeout.prev = null;
timeout.next = null;
timeout.bucket = null;
if (timeout.timer.shouldLimitTimeouts()) {
timeout.timer.pendingTimeouts.decrementAndGet();
}
return next;
}
/**
* Clear this bucket and return all not expired / cancelled {@link Timeout}s.
*/
public void clearTimeouts(Set<Timeout> set) {
for (; ; ) {
HashedWheelTimeout timeout = pollTimeout();
if (timeout == null) {
return;
}
if (timeout.isExpired() || timeout.isCancelled()) {
continue;
}
set.add(timeout);
}
}
private HashedWheelTimeout pollTimeout() {
HashedWheelTimeout head = this.head;
if (head == null) {
return null;
}
HashedWheelTimeout next = head.next;
if (next == null) {
tail = this.head = null;
} else {
this.head = next;
next.prev = null;
}
// null out prev and next to allow for GC.
head.next = null;
head.prev = null;
head.bucket = null;
return head;
}
}
}
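For reference, a minimal usage example of the class above through Netty's public Timer API (newTimeout schedules a TimerTask after a delay; stop() returns the timeouts that never ran).

import io.netty.util.HashedWheelTimer;
import io.netty.util.Timeout;
import io.netty.util.Timer;
import java.util.concurrent.TimeUnit;

public class HashedWheelTimerDemo {
    public static void main(String[] args) throws InterruptedException {
        // 100ms per tick, 512 slots: the same defaults described in the constructor above.
        Timer timer = new HashedWheelTimer(100, TimeUnit.MILLISECONDS, 512);

        Timeout timeout = timer.newTimeout(
                t -> System.out.println("delayed task fired, cancelled=" + t.isCancelled()),
                3, TimeUnit.SECONDS);

        Thread.sleep(3500);  // let the task fire (precision is bounded by the tick duration)
        System.out.println("expired=" + timeout.isExpired());
        timer.stop();        // returns any timeouts that were never processed
    }
}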
References:
https://wisewong.github.io/archives/e2d1a18d.html
https://cloud.tencent.com/developer/news/368207