Encoding and Decoding Video with AVAssetReader and AVAssetWriter

This post is a follow-up to Processing Video with AVFoundation.

The previous post covered the limitations of AVAssetExportSession; a better option is to re-encode the video with AVAssetWriter:

Compared with AVAssetExportSession, AVAssetWriter's advantage is much finer control over compression settings when encoding the output. You can specify settings such as keyframe interval, video bit rate, pixel aspect ratio, clean aperture, and H.264 profile.

Basics

  • AVAssetReader — reads an asset (think of it as decoding)
  • AVAssetReaderOutput — configures how the asset is read
    • AVAssetReaderTrackOutput
    • AVAssetReaderVideoCompositionOutput
    • AVAssetReaderAudioMixOutput
  • AVAssetWriter — writes an asset (think of it as encoding)
  • AVAssetWriterInput — configures the encoder's input
  • CMSampleBuffer — the buffered media data

AVAssetReader and AVAssetReaderOutput work as a pair: they determine how the asset is decoded into buffers.
AVAssetWriter and AVAssetWriterInput work as a pair: they determine how that data is encoded back into a video.
CMSampleBuffer carries the media data: AVAssetReader outputs CMSampleBuffers, and AVAssetWriter re-encodes CMSampleBuffers into a video.

AVAssetReader

AVAssetReader provides services for obtaining media data from an asset.

AVAssetReader reads media data from an asset. Each AVAssetReader is associated with a single AVAsset; since an AVAsset can contain multiple tracks, one AVAssetReader can read multiple tracks.
If you need an AVAssetReader to read data from multiple AVAssets, combine them into a single AVComposition and have the reader read that composition instead.

To read data, an AVAssetReader must have outputs (AVAssetReaderOutput) added to it; these configure how the media data is read. You can add a different output for each track:

    open var outputs: [AVAssetReaderOutput] { get }
    open func add(_ output: AVAssetReaderOutput)

After adding the outputs, call startReading to begin reading:

open func startReading() -> Bool

AVAssetReaderOutput

The base class for read-output configuration. In practice, use one of the following subclasses:

  • AVAssetReaderTrackOutput
    Configures reading from a single track
  • AVAssetReaderVideoCompositionOutput
    Applies an AVVideoComposition to the video; like AVAssetExportSession's videoComposition, it can handle the video's size, background, and so on
  • AVAssetReaderAudioMixOutput
    Applies an AVAudioMix to the audio; like AVAssetExportSession's audioMix, it can adjust the audio

Use copyNextSampleBuffer to fetch the decoded data:

open func copyNextSampleBuffer() -> CMSampleBuffer?
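A minimal read loop might look like the sketch below (illustrative names; `reader` and `output` are assumed to have been configured as described above):

```swift
import AVFoundation

// Drain one AVAssetReaderOutput after startReading() has been called.
func drain(output: AVAssetReaderOutput, reader: AVAssetReader) {
    while reader.status == .reading,
          let sampleBuffer = output.copyNextSampleBuffer() {
        // Process the decoded CMSampleBuffer here, e.g. hand it to an
        // AVAssetWriterInput or inspect its timing information.
        _ = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    }
    // When copyNextSampleBuffer() returns nil, check reader.status:
    // .completed means all samples were read; .failed exposes reader.error.
}
```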

AVAssetWriter

AVAssetWriter provides services for writing media data to a new file.

AVAssetWriter writes media data to a single new file, in a specified file format and with specified settings.
Unlike AVAssetReader, AVAssetWriter is not tied to an AVAsset; it can write data coming from multiple sources.

To write data, an AVAssetWriter must have inputs (AVAssetWriterInput) added to it; these configure how the media data is written:

open var inputs: [AVAssetWriterInput] { get }
open func add(_ input: AVAssetWriterInput)

After adding the inputs, call startWriting to begin writing:

open func startWriting() -> Bool

Then start a write session:

open func startSession(atSourceTime startTime: CMTime)

When writing is complete, close the session:

// Marks writing as finished; this also ends the session
open func finishWriting() async

AVAssetWriterInput

Configures an input; you can create a different input for each media type:

// Whether the input is ready to accept more media data
open var isReadyForMoreMediaData: Bool { get }

// Invokes the block on the given queue whenever the input is ready for more data
open func requestMediaDataWhenReady(on queue: DispatchQueue, using block: @escaping () -> Void)

// Appends a sample buffer
open func append(_ sampleBuffer: CMSampleBuffer) -> Bool

// Marks this input as finished
open func markAsFinished()

Note that AVAssetWriter and AVAssetReader do not have to be used as a pair. AVAssetWriter only needs sample buffers, and those can come from many sources: from an AVAssetReader, from the live stream while the camera records video, or from converted image data. Converting images to a video is implemented in detail below.
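As a sketch of the camera case, an AVCaptureVideoDataOutput delegate can append live sample buffers straight to an AVAssetWriterInput. This is illustrative only; `writer` and `videoInput` are assumed to be configured as in the rest of this post:

```swift
import AVFoundation

// Illustrative: feed AVAssetWriter from a live capture stream instead of an AVAssetReader.
final class CaptureRecorder: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let writer: AVAssetWriter
    let videoInput: AVAssetWriterInput
    private var sessionStarted = false

    init(writer: AVAssetWriter, videoInput: AVAssetWriterInput) {
        self.writer = writer
        self.videoInput = videoInput
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        if !sessionStarted {
            // Start the session at the first frame's timestamp rather than .zero.
            writer.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
            sessionStarted = true
        }
        // Drop frames while the input is busy; a real recorder might buffer them.
        if videoInput.isReadyForMoreMediaData {
            videoInput.append(sampleBuffer)
        }
    }
}
```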


outputSettings

Both AVAssetReaderOutput and AVAssetWriterInput take a settings dictionary (outputSettings); these settings are the core of controlling decoding and encoding.

AVVideoSettings

  • AVVideoCodecKey — the codec
  • AVVideoWidthKey — pixel width
  • AVVideoHeightKey — pixel height
  • AVVideoCompressionPropertiesKey — compression settings:
    • AVVideoAverageBitRateKey — bits per second; about 3,000,000 suits 720×1280
    • AVVideoProfileLevelKey — H.264 profile, from low to high: BP (Baseline), EP (Extended), MP (Main), HP (High)
    • AVVideoMaxKeyFrameIntervalKey — maximum keyframe interval

AVAudioSettings

  • AVFormatIDKey — audio format
  • AVNumberOfChannelsKey — number of channels
  • AVSampleRateKey — sample rate
  • AVEncoderBitRateKey — encoder bit rate

Implementation

As usual, the UML diagram first.

Merging multiple videos into one

// Create the composition and its editable tracks
let composition = AVMutableComposition()
// Video track
let videoCompositionTrack = composition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)
// Audio track
let audioCompositionTrack = composition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)
var insertTime = CMTime.zero
for url in urls {
    autoreleasepool {
        // Load the asset and pull out its video and audio tracks
        let asset = AVURLAsset(url: url)
        let videoTrack = asset.tracks(withMediaType: .video).first
        let audioTrack = asset.tracks(withMediaType: .audio).first
        let videoTimeRange = videoTrack?.timeRange
        let audioTimeRange = audioTrack?.timeRange
        
        // Merge all video tracks onto one AVMutableCompositionTrack
        if let insertVideoTrack = videoTrack, let insertVideoTime = videoTimeRange {
            do {
                // Insert the track at the current insertion time
                try videoCompositionTrack?.insertTimeRange(CMTimeRange(start: .zero, duration: insertVideoTime.duration), of: insertVideoTrack, at: insertTime)
            } catch let e {
                callback(false, e)
                return
            }
        }
        
        // Merge all audio tracks onto one AVMutableCompositionTrack
        if let insertAudioTrack = audioTrack, let insertAudioTime = audioTimeRange {
            do {
                try audioCompositionTrack?.insertTimeRange(CMTimeRange(start: .zero, duration: insertAudioTime.duration), of: insertAudioTrack, at: insertTime)
            } catch let e {
                callback(false, e)
                return
            }
        }
        
        insertTime = insertTime + asset.duration
    }
}
// ----- Reading -----
let videoTracks = composition.tracks(withMediaType: .video)
let audioTracks = composition.tracks(withMediaType: .audio)
guard let videoTrack = videoTracks.first, let audioTrack = audioTracks.first else {
    callback(false, nil)
    return
}
// AVAssetReader
do {
    reader = try AVAssetReader(asset: composition)
} catch let e {
    callback(false, e)
    return
}
reader.timeRange = CMTimeRange(start: .zero, duration: composition.duration)
// Uncompressed audio/video settings (AVAssetReaderTrackOutput requires uncompressed settings)
let audioOutputSetting = [
    AVFormatIDKey: kAudioFormatLinearPCM
]
let videoOutputSetting = [
    kCVPixelBufferPixelFormatTypeKey as String: UInt32(kCVPixelFormatType_422YpCbCr8)
]
videoOutput = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: videoOutputSetting)
videoOutput.alwaysCopiesSampleData = false
if reader.canAdd(videoOutput) {
    reader.add(videoOutput)
}
audioOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: audioOutputSetting)
audioOutput.alwaysCopiesSampleData = false
if reader.canAdd(audioOutput) {
    reader.add(audioOutput)
}
reader.startReading()
// ----- Writing -----
// AVAssetWriter
do {
    writer = try AVAssetWriter(outputURL: outputUrl, fileType: .mp4)
} catch let e {
    callback(false, e)
    return
}
writer.shouldOptimizeForNetworkUse = true
let videoInputSettings: [String : Any] = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: 720,
    AVVideoHeightKey: 1280,
    AVVideoCompressionPropertiesKey: [
        AVVideoAverageBitRateKey: 1000000,
        AVVideoProfileLevelKey: AVVideoProfileLevelH264High40
    ]
]
let audioInputSettings: [String : Any] = [
    AVFormatIDKey: NSNumber(value: kAudioFormatMPEG4AAC),
    AVNumberOfChannelsKey: NSNumber(value: 2),
    AVSampleRateKey: NSNumber(value: 44100),
    AVEncoderBitRateKey: NSNumber(value: 128000)
]
// AVAssetWriterInput
videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoInputSettings)
if writer.canAdd(videoInput) {
    writer.add(videoInput)
}
audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioInputSettings)
if writer.canAdd(audioInput) {
    writer.add(audioInput)
}
writer.startWriting()
writer.startSession(atSourceTime: .zero)
// Prepare to write data
writeGroup.enter()
videoInput.requestMediaDataWhenReady(on: inputQueue) { [weak self] in
    guard let wself = self else {
        callback(false, nil)
        return
    }
    
    if wself.encodeReadySamples(from: wself.videoOutput, to: wself.videoInput) {
        wself.writeGroup.leave()
    }
}
writeGroup.enter()
audioInput.requestMediaDataWhenReady(on: inputQueue) { [weak self] in
    guard let wself = self else {
        callback(false, nil)
        return
    }
    
    if wself.encodeReadySamples(from: wself.audioOutput, to: wself.audioInput) {
        wself.writeGroup.leave()
    }
}
writeGroup.notify(queue: inputQueue) {
    self.writer.finishWriting {
        callback(true, nil)
    }
}
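The `encodeReadySamples(from:to:)` helper called above is not shown in the post; a minimal sketch, under the assumption that it returns `true` once its output is fully drained, might be:

```swift
import AVFoundation

// Copy sample buffers from a reader output to a writer input while the input
// can accept them. Returns true when this output/input pair is finished.
func encodeReadySamples(from output: AVAssetReaderOutput,
                        to input: AVAssetWriterInput) -> Bool {
    while input.isReadyForMoreMediaData {
        guard let buffer = output.copyNextSampleBuffer() else {
            // No more samples: mark the input finished so the writer can
            // complete once every input is done.
            input.markAsFinished()
            return true
        }
        if !input.append(buffer) {
            // Append failed; inspect writer.status / writer.error.
            input.markAsFinished()
            return true
        }
    }
    // The input is temporarily full; the requestMediaDataWhenReady block
    // will fire again when it can accept more data.
    return false
}
```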

Merging multiple videos (with VideoComposition and AudioMix)

let composition = AVMutableComposition()
guard let videoCompositionTrack = composition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid) else {
    callback(false, nil)
    return
}
let audioCompositionTrack = composition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)
// The layerInstruction adjusts the video layer; one instruction covers the whole merged track
let vcLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoCompositionTrack)
let layerInstructions = [vcLayerInstruction]
var audioParameters: [AVMutableAudioMixInputParameters] = []
var insertTime = CMTime.zero
for url in urls {
    autoreleasepool {
        let asset = AVURLAsset(url: url)
        let videoTrack = asset.tracks(withMediaType: .video).first
        let audioTrack = asset.tracks(withMediaType: .audio).first
        let videoTimeRange = videoTrack?.timeRange
        let audioTimeRange = audioTrack?.timeRange
        
        if let insertVideoTrack = videoTrack, let insertVideoTime = videoTimeRange {
            do {
                try videoCompositionTrack.insertTimeRange(CMTimeRange(start: .zero, duration: insertVideoTime.duration), of: insertVideoTrack, at: insertTime)
                
                // Adjust the transform to correct orientation and size
                var trans = insertVideoTrack.preferredTransform
                let size = insertVideoTrack.naturalSize
                let orientation = VideoEditHelper.orientationFromVideo(assetTrack: insertVideoTrack)
                switch orientation {
                    case .portrait:
                        let scale = MMAssetExporter.renderSize.height / size.width
                        trans = CGAffineTransform(scaleX: scale, y: scale)
                        trans = trans.translatedBy(x: size.height, y: 0)
                        trans = trans.rotated(by: .pi / 2.0)
                    case .landscapeLeft:
                        let scale = MMAssetExporter.renderSize.width / size.width
                        trans = CGAffineTransform(scaleX: scale, y: scale)
                        trans = trans.translatedBy(x: size.width, y: size.height + (MMAssetExporter.renderSize.height - size.height * scale) / scale / 2.0)
                        trans = trans.rotated(by: .pi)
                    case .portraitUpsideDown:
                        let scale = MMAssetExporter.renderSize.height / size.width
                        trans = CGAffineTransform(scaleX: scale, y: scale)
                        trans = trans.translatedBy(x: 0, y: size.width)
                        trans = trans.rotated(by: .pi / 2.0 * 3)
                    case .landscapeRight:
                        // Default orientation
                        let scale = MMAssetExporter.renderSize.width / size.width
                        trans = CGAffineTransform(scaleX: scale, y: scale)
                        trans = trans.translatedBy(x: 0, y: (MMAssetExporter.renderSize.height - size.height * scale) / scale / 2.0)
                }
                
                vcLayerInstruction.setTransform(trans, at: insertTime)
            } catch let e {
                callback(false, e)
                return
            }
        }
        if let insertAudioTrack = audioTrack, let insertAudioTime = audioTimeRange {
            do {
                try audioCompositionTrack?.insertTimeRange(CMTimeRange(start: .zero, duration: insertAudioTime.duration), of: insertAudioTrack, at: insertTime)
                
                let adParameter = AVMutableAudioMixInputParameters(track: insertAudioTrack)
                adParameter.setVolume(1, at: .zero)
                audioParameters.append(adParameter)
            } catch let e {
                callback(false, e)
                return
            }
        }
        
        insertTime = insertTime + asset.duration
    }
}
let videoTracks = composition.tracks(withMediaType: .video)
let audioTracks = composition.tracks(withMediaType: .audio)
let videoComposition = AVMutableVideoComposition()
// videoComposition must specify a frame rate (frameDuration) and a renderSize
videoComposition.frameDuration = CMTime(value: 1, timescale: 30)
videoComposition.renderSize = MMAssetExporter.renderSize
let vcInstruction = AVMutableVideoCompositionInstruction()
vcInstruction.timeRange = CMTimeRange(start: .zero, duration: composition.duration)
vcInstruction.backgroundColor = UIColor.red.cgColor // the video's background color
vcInstruction.layerInstructions = layerInstructions
videoComposition.instructions = [vcInstruction]
let audioMix = AVMutableAudioMix()
audioMix.inputParameters = audioParameters
// AVAssetReader
do {
    reader = try AVAssetReader(asset: composition)
} catch let e {
    callback(false, e)
    return
}
reader.timeRange = CMTimeRange(start: .zero, duration: composition.duration)
// AVAssetReaderOutput
videoOutput = AVAssetReaderVideoCompositionOutput(videoTracks: videoTracks, videoSettings: nil)
videoOutput.alwaysCopiesSampleData = false
videoOutput.videoComposition = videoComposition
if reader.canAdd(videoOutput) {
    reader.add(videoOutput)
}
audioOutput = AVAssetReaderAudioMixOutput(audioTracks: audioTracks, audioSettings: nil)
audioOutput.alwaysCopiesSampleData = false
audioOutput.audioMix = audioMix
if reader.canAdd(audioOutput) {
    reader.add(audioOutput)
}
if !reader.startReading() {
    callback(false, reader.error)
    return
}
// ----- Writing -----
// AVAssetWriter
do {
    writer = try AVAssetWriter(outputURL: outputUrl, fileType: .mp4)
} catch let e {
    callback(false, e)
    return
}
writer.shouldOptimizeForNetworkUse = true
let videoInputSettings: [String : Any] = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: 720,
    AVVideoHeightKey: 1280,
    AVVideoCompressionPropertiesKey: [
        AVVideoAverageBitRateKey: 1000000,
        AVVideoProfileLevelKey: AVVideoProfileLevelH264High40
    ]
]
let audioInputSettings: [String : Any] = [
    AVFormatIDKey: NSNumber(value: kAudioFormatMPEG4AAC),
    AVNumberOfChannelsKey: NSNumber(value: 2),
    AVSampleRateKey: NSNumber(value: 44100),
    AVEncoderBitRateKey: NSNumber(value: 128000)
]
// AVAssetWriterInput
videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoInputSettings)
if writer.canAdd(videoInput) {
    writer.add(videoInput)
}
audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioInputSettings)
if writer.canAdd(audioInput) {
    writer.add(audioInput)
}
writer.startWriting()
writer.startSession(atSourceTime: .zero)
// Prepare to write data
writeGroup.enter()
videoInput.requestMediaDataWhenReady(on: inputQueue) { [weak self] in
    guard let wself = self else {
        callback(false, nil)
        return
    }
    
    if wself.encodeReadySamples(from: wself.videoOutput, to: wself.videoInput) {
        wself.writeGroup.leave()
    }
}
writeGroup.enter()
audioInput.requestMediaDataWhenReady(on: inputQueue) { [weak self] in
    guard let wself = self else {
        callback(false, nil)
        return
    }
    if wself.encodeReadySamples(from: wself.audioOutput, to: wself.audioInput) {
        wself.writeGroup.leave()
    }
}
writeGroup.notify(queue: inputQueue) {
    self.writer.finishWriting {
        callback(true, nil)
    }
}

With VideoComposition and AudioMix set, the AVAssetReader outputs must be AVAssetReaderVideoCompositionOutput and AVAssetReaderAudioMixOutput.
As with AVAssetExportSession, it is ultimately the VideoComposition and AudioMix that adjust the video's size, rotation, background color, and volume; they can also add watermarks and the like.

Compared with AVAssetExportSession, the asset and track separation and composition work is exactly the same with AVAssetReader and AVAssetWriter; the reader/writer pair can be wrapped up into something used much like AVAssetExportSession:

public var composition: AVComposition!
public var videoComposition: AVVideoComposition!
public var audioMix: AVAudioMix!
public var outputUrl: URL!
public var videoInputSettings: [String : Any]?
public var videoOutputSettings: [String : Any]?
public var audioInputSettings: [String : Any]?
public var audioOutputSettings: [String : Any]?

public func exportAsynchronously(completionHandler callback: @escaping VideoResult) {
    let videoTracks = composition.tracks(withMediaType: .video)
    let audioTracks = composition.tracks(withMediaType: .audio)
    
    do {
        reader = try AVAssetReader(asset: composition)
    } catch let e {
        callback(false, e)
        return
    }
    reader.timeRange = CMTimeRange(start: .zero, duration: composition.duration)
    
    videoOutput = AVAssetReaderVideoCompositionOutput(videoTracks: videoTracks, videoSettings: videoOutputSettings)
    videoOutput.alwaysCopiesSampleData = false
    videoOutput.videoComposition = videoComposition
    if reader.canAdd(videoOutput) {
        reader.add(videoOutput)
    }
    audioOutput = AVAssetReaderAudioMixOutput(audioTracks: audioTracks, audioSettings: audioOutputSettings)
    audioOutput.alwaysCopiesSampleData = false
    audioOutput.audioMix = audioMix
    if reader.canAdd(audioOutput) {
        reader.add(audioOutput)
    }
    
    if !reader.startReading() {
        callback(false, reader.error)
        return
    }
    
    // ----- Writing -----
    do {
        writer = try AVAssetWriter(outputURL: outputUrl, fileType: .mp4)
    } catch let e {
        callback(false, e)
        return
    }
    writer.shouldOptimizeForNetworkUse = true
    
    // AVAssetWriterInput
    videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoInputSettings)
    if writer.canAdd(videoInput) {
        writer.add(videoInput)
    }
    
    audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioInputSettings)
    if writer.canAdd(audioInput) {
        writer.add(audioInput)
    }
    
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)
    
    // Prepare to write data
// videoInput.requestMediaDataWhenReady
// audioInput.requestMediaDataWhenReady
// encodeReadySamples
   ...
}

Compositing images into a video

do {
    writer = try AVAssetWriter(outputURL: outputUrl, fileType: .mp4)
} catch let e {
    callback(false, e)
    return
}
writer.shouldOptimizeForNetworkUse = true
videoInputSettings = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: MMAssetExporter.renderSize.width,
    AVVideoHeightKey: MMAssetExporter.renderSize.height
]
videoInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoInputSettings)
let adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: videoInput, sourcePixelBufferAttributes: nil)
if writer.canAdd(videoInput) {
    writer.add(videoInput)
}
writer.startWriting()
writer.startSession(atSourceTime: .zero)
let pixelBuffers = images.map { image in
    self.pixelBuffer(from: image)
}
let seconds = 2 // seconds each image is shown
let timescale = 30 // 30 frames per second
let frames = images.count * seconds * timescale // total frame count
var frame = 0
videoInput.requestMediaDataWhenReady(on: inputQueue) { [weak self] in
    guard let wself = self else {
        callback(false, nil)
        return
    }
    
    if frame >= frames {
        // All frames have been written
        wself.videoInput.markAsFinished()
        wself.writer.finishWriting {
            callback(true, nil)
        }
        return
    }
    
    let imageIndex = frame / (seconds * timescale)
    let time = CMTime(value: CMTimeValue(frame), timescale: CMTimeScale(timescale))
    let pxData = pixelBuffers[imageIndex]
    if let cvbuffer = pxData {
        adaptor.append(cvbuffer, withPresentationTime: time)
    }
    
    frame += 1
}

This uses AVAssetWriterInputPixelBufferAdaptor. As mentioned earlier, AVAssetWriter's data does not have to come from an AVAssetReader; it can accept data from many sources. AVAssetWriterInputPixelBufferAdaptor is the adapter that lets AVAssetWriter write those other kinds of data.
Note that the AVAssetWriterInputPixelBufferAdaptor must be created before writer.startWriting() is called.
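The `pixelBuffer(from:)` helper used above is not shown in the post either; a hypothetical implementation, rendering a UIImage into a newly created CVPixelBuffer, might look like:

```swift
import AVFoundation
import UIKit

// Hypothetical helper: render a UIImage into a CVPixelBuffer of the given size.
func pixelBuffer(from image: UIImage, size: CGSize) -> CVPixelBuffer? {
    let attrs: [CFString: Any] = [
        kCVPixelBufferCGImageCompatibilityKey: true,
        kCVPixelBufferCGBitmapContextCompatibilityKey: true
    ]
    var buffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                     Int(size.width), Int(size.height),
                                     kCVPixelFormatType_32ARGB,
                                     attrs as CFDictionary, &buffer)
    guard status == kCVReturnSuccess, let pixelBuffer = buffer,
          let cgImage = image.cgImage else { return nil }

    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

    // Draw the image into the buffer's backing memory.
    guard let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                                  width: Int(size.width), height: Int(size.height),
                                  bitsPerComponent: 8,
                                  bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
    else { return nil }
    context.draw(cgImage, in: CGRect(origin: .zero, size: size))
    return pixelBuffer
}
```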

Problems encountered

-[AVAssetWriterInput appendSampleBuffer:] Cannot append sample buffer: Input buffer must be in an uncompressed format when outputSettings is not nil

Cause: AVAssetReaderTrackOutput was used with outputSettings that are not uncompressed (when outputSettings is nil, the source settings are used). Supply uncompressed outputSettings to fix it.

[AVAssetReaderTrackOutput copyNextSampleBuffer] cannot copy next sample buffer before adding this output to an instance of AVAssetReader (using -addOutput:) and calling -startReading on that asset reader

Cause: the AVAssetReader was released while data was still being read; keep a strong reference to the AVAssetReader object.
See: https://stackoverflow.com/questions/27608510/avfoundation-add-first-frame-to-video

reader.startReading() fails with:
Error Domain=AVFoundationErrorDomain Code=-11841

See: https://www.cnblogs.com/song-jw/p/9530249.html
