On the Netty ByteBuf Memory Leak Problem

The 东华车管 (vehicle-administration) data-collection platform I built earlier kept losing data now and then. It was not frequent, but the cause still needed looking into, so today I raised Netty's log level to track down where the problem was. The code to raise the level:

ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
 .channel(NioServerSocketChannel.class)
 .option(ChannelOption.SO_BACKLOG, 2048)
 .handler(new LoggingHandler(LogLevel.DEBUG))
 .childHandler(new ChildChannelHandler());

Setting the LogLevel to DEBUG is all it takes.
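For context: ChildChannelHandler is my own initializer class. Judging from the pipeline that appears in the leak trace further down (DelimiterBasedFrameDecoder followed by ObdServerHandler), it looks roughly like the sketch below; the delimiter and maximum frame length here are placeholders, not the real protocol values.

// Sketch of ChildChannelHandler, reconstructed from the pipeline visible in the
// leak trace further down (DelimiterBasedFrameDecoder -> ObdServerHandler).
// The "\r\n" delimiter and the 1024-byte frame limit are placeholders.
private class ChildChannelHandler extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) throws Exception {
        ByteBuf delimiter = Unpooled.copiedBuffer("\r\n".getBytes()); // placeholder delimiter
        ch.pipeline().addLast(new DelimiterBasedFrameDecoder(1024, delimiter)); // frame splitting
        ch.pipeline().addLast(new ObdServerHandler());                          // business handler
    }
}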
Then I sat back and watched the logs:

2017-01-19 10:04:46  [ nioEventLoopGroup-1-0:1625429 ] - [ INFO ]  消息主体:60160308049620860021010707190117020453395443491162627407087d081f00002e37008801008c00f9
2017-01-19 10:04:49  [ nioEventLoopGroup-1-0:1628830 ] - [ ERROR ]  LEAK: ByteBuf.release() was not called before it's garbage-collected. Enable advanced leak reporting to find out where the leak occurred. To enable advanced leak reporting, specify the JVM option '-Dio.netty.leakDetectionLevel=advanced' or call ResourceLeakDetector.setLevel() See http://netty.io/wiki/reference-counted-objects.html for more information.
2017-01-19 10:04:49  [ nioEventLoopGroup-1-0:1628845 ] - [ INFO ]  入缓存队列操作结果:9
2017-01-19 10:04:49  [ nioEventLoopGroup-1-0:1628845 ] - [ INFO ]  消息主体:601603080496208600210107071901170204573954434611626262170f88091f00002e37008801008c00fa
2017-01-19 10:04:53  [ nioEventLoopGroup-1-0:1632839 ] - [ INFO ]  入缓存队列操作结果:9
2017-01-19 10:04:53  [ nioEventLoopGroup-1-0:1632839 ] - [ INFO ]  消息主体:60160308049620860021010707190117020501395443581162624817108a091f00002e37008801008c00fb
2017-01-19 10:04:55  [ nioEventLoopGroup-1-0:1634196 ] - [ INFO ]  入缓存队列操作结果:9
2017-01-19 10:04:55  [ nioEventLoopGroup-1-0:1634196 ] - [ INFO ]  消息主体:601603080496208600210107071901170205023954436011626244571288091f00002e37008801008c00fc
2017-01-19 10:04:56  [ nioEventLoopGroup-1-0:1635288 ] - [ INFO ]  入缓存队列操作结果:9
2017-01-19 10:04:56  [ nioEventLoopGroup-1-0:1635288 ] - [ INFO ]  消息主体:60160308049620860021010707190117020503395443651162624107118a091f00002e37008801008c00fd
2017-01-19 10:04:57  [ nioEventLoopGroup-1-0:1636443 ] - [ INFO ]  入缓存队列操作结果:9
2017-01-19 10:04:57  [ nioEventLoopGroup-1-0:1636443 ] - [ INFO ]  消息主体:601603080496208600210107071901170205053954437111626234671088091f00002e37008801008c00fe

Note this line:

LEAK: ByteBuf.release() was not called before it's garbage-collected. Enable advanced leak reporting to find out where the leak occurred. To enable advanced leak reporting, specify the JVM option '-Dio.netty.leakDetectionLevel=advanced' or call ResourceLeakDetector.setLevel() See http://netty.io/wiki/reference-counted-objects.html for more information.

This message tells us that simply adding

ResourceLeakDetector.setLevel(ResourceLeakDetector.Level.ADVANCED);

sets the leak-detection level to ADVANCED, which gives much more detailed leak information. After that, back to the logs:

2017-01-19 10:35:59  [ nioEventLoopGroup-1-0:665092 ] - [ ERROR ]  LEAK: ByteBuf.release() was not called before it's garbage-collected. See http://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records: 5
#5:
    io.netty.buffer.AdvancedLeakAwareByteBuf.readBytes(AdvancedLeakAwareByteBuf.java:435)
    com.dhcc.ObdServer.ObdServerHandler.channelRead(ObdServerHandler.java:31)
    io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:84)
    io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:153)
    io.netty.channel.PausableChannelEventExecutor.invokeChannelRead(PausableChannelEventExecutor.java:86)
    io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:389)
    io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:243)
    io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:84)
    io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:153)
    io.netty.channel.PausableChannelEventExecutor.invokeChannelRead(PausableChannelEventExecutor.java:86)
    io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:389)
    io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:956)
    io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:127)
    io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:514)
    io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:471)
    io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:385)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:351)
    io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
    io.netty.util.internal.chmv8.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1412)
    io.netty.util.internal.chmv8.ForkJoinTask.doExec(ForkJoinTask.java:280)
    io.netty.util.internal.chmv8.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:877)
    io.netty.util.internal.chmv8.ForkJoinPool.scan(ForkJoinPool.java:1706)
    io.netty.util.internal.chmv8.ForkJoinPool.runWorker(ForkJoinPool.java:1661)
    io.netty.util.internal.chmv8.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:126)
#4:
    Hint: 'ObdServerHandler#0' will handle the message from this point.
    io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:387)
    io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:243)
    io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:84)
    io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:153)
    io.netty.channel.PausableChannelEventExecutor.invokeChannelRead(PausableChannelEventExecutor.java:86)
    io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:389)
    io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:956)
    io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:127)
    io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:514)
    io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:471)
    io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:385)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:351)
    io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
    io.netty.util.internal.chmv8.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1412)
    io.netty.util.internal.chmv8.ForkJoinTask.doExec(ForkJoinTask.java:280)
    io.netty.util.internal.chmv8.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:877)
    io.netty.util.internal.chmv8.ForkJoinPool.scan(ForkJoinPool.java:1706)
    io.netty.util.internal.chmv8.ForkJoinPool.runWorker(ForkJoinPool.java:1661)
    io.netty.util.internal.chmv8.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:126)
#3:
    io.netty.buffer.AdvancedLeakAwareByteBuf.release(AdvancedLeakAwareByteBuf.java:721)
    io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:237)
    io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:84)
    io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:153)
    io.netty.channel.PausableChannelEventExecutor.invokeChannelRead(PausableChannelEventExecutor.java:86)
    io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:389)
    io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:956)
    io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:127)
    io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:514)
    io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:471)
    io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:385)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:351)
    io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
    io.netty.util.internal.chmv8.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1412)
    io.netty.util.internal.chmv8.ForkJoinTask.doExec(ForkJoinTask.java:280)
    io.netty.util.internal.chmv8.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:877)
    io.netty.util.internal.chmv8.ForkJoinPool.scan(ForkJoinPool.java:1706)
    io.netty.util.internal.chmv8.ForkJoinPool.runWorker(ForkJoinPool.java:1661)
    io.netty.util.internal.chmv8.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:126)
#2:
    io.netty.buffer.AdvancedLeakAwareByteBuf.retain(AdvancedLeakAwareByteBuf.java:693)
    io.netty.handler.codec.DelimiterBasedFrameDecoder.decode(DelimiterBasedFrameDecoder.java:277)
    io.netty.handler.codec.DelimiterBasedFrameDecoder.decode(DelimiterBasedFrameDecoder.java:216)
    io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:316)
    io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:230)
    io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:84)
    io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:153)
    io.netty.channel.PausableChannelEventExecutor.invokeChannelRead(PausableChannelEventExecutor.java:86)
    io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:389)
    io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:956)
    io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:127)
    io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:514)
    io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:471)
    io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:385)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:351)
    io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
    io.netty.util.internal.chmv8.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1412)
    io.netty.util.internal.chmv8.ForkJoinTask.doExec(ForkJoinTask.java:280)
    io.netty.util.internal.chmv8.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:877)
    io.netty.util.internal.chmv8.ForkJoinPool.scan(ForkJoinPool.java:1706)
    io.netty.util.internal.chmv8.ForkJoinPool.runWorker(ForkJoinPool.java:1661)
    io.netty.util.internal.chmv8.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:126)
#1:
    io.netty.buffer.AdvancedLeakAwareByteBuf.skipBytes(AdvancedLeakAwareByteBuf.java:465)
    io.netty.handler.codec.DelimiterBasedFrameDecoder.decode(DelimiterBasedFrameDecoder.java:272)
    io.netty.handler.codec.DelimiterBasedFrameDecoder.decode(DelimiterBasedFrameDecoder.java:216)
    io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:316)
    io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:230)
    io.netty.channel.ChannelHandlerInvokerUtil.invokeChannelReadNow(ChannelHandlerInvokerUtil.java:84)
    io.netty.channel.DefaultChannelHandlerInvoker.invokeChannelRead(DefaultChannelHandlerInvoker.java:153)
    io.netty.channel.PausableChannelEventExecutor.invokeChannelRead(PausableChannelEventExecutor.java:86)
    io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:389)
    io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:956)
    io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:127)
    io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:514)
    io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:471)
    io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:385)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:351)
    io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
    io.netty.util.internal.chmv8.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1412)
    io.netty.util.internal.chmv8.ForkJoinTask.doExec(ForkJoinTask.java:280)
    io.netty.util.internal.chmv8.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:877)
    io.netty.util.internal.chmv8.ForkJoinPool.scan(ForkJoinPool.java:1706)
    io.netty.util.internal.chmv8.ForkJoinPool.runWorker(ForkJoinPool.java:1661)
    io.netty.util.internal.chmv8.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:126)
Created at:
    io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:250)
    io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:155)
    io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:146)
    io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:107)
    io.netty.channel.AdaptiveRecvByteBufAllocator$HandleImpl.allocate(AdaptiveRecvByteBufAllocator.java:104)
    io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:113)
    io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:514)
    io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:471)
    io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:385)
    io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:351)
    io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
    io.netty.util.internal.chmv8.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1412)
    io.netty.util.internal.chmv8.ForkJoinTask.doExec(ForkJoinTask.java:280)
    io.netty.util.internal.chmv8.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:877)
    io.netty.util.internal.chmv8.ForkJoinPool.scan(ForkJoinPool.java:1706)
    io.netty.util.internal.chmv8.ForkJoinPool.runWorker(ForkJoinPool.java:1661)
    io.netty.util.internal.chmv8.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:126)

It points to this spot in my code:

ByteBuf buff = (ByteBuf) msg;
byte[] req = new byte[buff.readableBytes()];

So the problem is indeed a ByteBuf memory leak. Digging in that direction, I found that Netty 5 allocates ByteBufs with PooledByteBufAllocator by default, so the buffers have to be released manually; otherwise they leak.
The fix is simply to release the ByteBuf:

ReferenceCountUtil.release(buff);
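
Putting this together, a minimal sketch of the corrected handler, written against the Netty 4 API (ChannelInboundHandlerAdapter; the base class name may differ slightly in the Netty 5 alpha I am running, but channelRead is the same). The key point is that release() sits in a finally block, so the buffer is freed even when processing throws:

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.util.ReferenceCountUtil;

// Minimal sketch of the corrected read handler: copy the bytes out, then always
// release the pooled ByteBuf, even when processing fails.
public class ObdServerHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        ByteBuf buff = (ByteBuf) msg;
        try {
            byte[] req = new byte[buff.readableBytes()];
            buff.readBytes(req);
            // ... decode req and put it on the cache queue, as before ...
        } finally {
            ReferenceCountUtil.release(buff); // drop our reference so the pool can reclaim it
        }
    }
}

Alternatively, in Netty 4 a handler that extends SimpleChannelInboundHandler releases the message automatically once channelRead0 returns, which avoids this class of leak entirely.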

Here is one user's explanation of that line, quoted from the web:

ReferenceCountUtil.release() is really just a wrapper around ByteBuf.release() (inherited from the ReferenceCounted interface). ByteBuf in Netty 4 is reference-counted (Netty 4 introduces an optional ByteBuf pool): a newly allocated ByteBuf has a reference count of 1; each additional reference to it requires a call to ByteBuf.retain(), and each dropped reference requires a call to ByteBuf.release(). When the reference count reaches 0, the object can be reclaimed. ByteBuf is only the example here; other classes implement ReferenceCounted as well, and the same rules apply to them.
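
A tiny self-contained illustration of that counting behaviour (an unpooled buffer is used here purely for demonstration):

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public class RefCountDemo {
    public static void main(String[] args) {
        ByteBuf buf = Unpooled.buffer(16);   // freshly allocated: refCnt() == 1
        buf.retain();                        // one more reference: refCnt() == 2
        buf.release();                       // drop one reference: refCnt() == 1
        boolean freed = buf.release();       // refCnt() hits 0 -> buffer is deallocated
        System.out.println(freed + ", refCnt=" + buf.refCnt()); // prints: true, refCnt=0
    }
}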

While investigating, I also suspected that the data loss might be because my Netty server was using UDP, so here is how to tell whether a Netty setup uses TCP or UDP:

About TCP and UDP
A socket can be built on TCP or on UDP. The difference is that UDP does not guarantee every packet is received correctly, so it performs better but is less reliable; TCP guarantees delivery, so its performance is not as good.
UDP is basically only suitable for things like live video streaming; our requirement should be TCP.

So how do the two differ in code? One explanation found online:

For the ChannelFactory, UDP uses NioDatagramChannelFactory, while for TCP we chose NioServerSocketChannelFactory;
for the Bootstrap, UDP uses ConnectionlessBootstrap, while TCP uses ServerBootstrap (see the sketch below).
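
Note that the class names in that quote come from the older Netty 3.x API (the org.jboss.netty packages), not from the Netty 4/5 ServerBootstrap shown at the top of this post. A rough Netty 3.x-style sketch of the two choices:

import java.util.concurrent.Executors;

import org.jboss.netty.bootstrap.ConnectionlessBootstrap;
import org.jboss.netty.bootstrap.ServerBootstrap;
import org.jboss.netty.channel.socket.nio.NioDatagramChannelFactory;
import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory;

public class BootstrapChoice {
    public static void main(String[] args) {
        // TCP: ServerBootstrap + NioServerSocketChannelFactory (boss + worker thread pools)
        ServerBootstrap tcpBootstrap = new ServerBootstrap(
                new NioServerSocketChannelFactory(
                        Executors.newCachedThreadPool(),
                        Executors.newCachedThreadPool()));

        // UDP: ConnectionlessBootstrap + NioDatagramChannelFactory (worker pool only)
        ConnectionlessBootstrap udpBootstrap = new ConnectionlessBootstrap(
                new NioDatagramChannelFactory(Executors.newCachedThreadPool()));
    }
}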

For the decoder/encoder and the ChannelPipelineFactory, UDP development is no different from TCP, so I won't go into detail here.

The ChannelHandler is where UDP and TCP really differ. UDP is connectionless: you can still get the current channel via the MessageEvent's getChannel() method, but its isConnected() always returns false.
In UDP development, once you have the channel inside the message-received callback you can send data to the peer directly with channel.write(message, remoteAddress); the first argument is still the message to send, and the second is the peer's SocketAddress.
The most important point is that SocketAddress: in TCP we can get it from channel.getRemoteAddress(), but in UDP we must get the peer's SocketAddress by calling getRemoteAddress() on the MessageEvent.
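
Still in Netty 3.x terms, a sketch of the handler-side difference (the echo write is only there to show where the remote address comes from):

import java.net.SocketAddress;

import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;

// Netty 3.x-style UDP handler sketch: the peer address comes from the MessageEvent,
// not from the channel (which is connectionless and never "connected").
public class UdpEchoHandler extends SimpleChannelUpstreamHandler {
    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        Object message = e.getMessage();
        SocketAddress remoteAddress = e.getRemoteAddress(); // not channel.getRemoteAddress()
        e.getChannel().write(message, remoteAddress);       // reply to the UDP peer
    }
}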
