Raspberry Pi Smart Speaker Project: Snowboy

Contact: snowboy@kitt.ai

Website: https://snowboy.kitt.ai

GitHub: https://github.com/kitt-ai/snowboy

Version: 1.1.1 (2017-03-24)

Note: as of August 2020 the project has changed, and it no longer works as smoothly on the Raspberry Pi; treat this guide as a reference only.

refer to: http://docs.kitt.ai/snowboy/

New!

Snowboy now offers Hotword as a Service. You can use our RESTful API calls to programmatically train a hotword model in 3 easy steps:

1. Pick a hotword.

2. Record it 3 times on your device.

3. Submit the audio files through our RESTful API calls; a model will be trained and returned.

Upon completion, the device can immediately perform hotword detection.

The following video demonstrates how it’s done using different customized hotwords in both English and Mandarin Chinese.

Introduction

Snowboy is a highly customizable hotword detection engine that runs embedded, in real time, and is always listening (even when offline). It is compatible with Raspberry Pi, (Ubuntu) Linux, and Mac OS X.

A hotword (also known as a wake word or trigger word) is a keyword or phrase that the computer constantly listens for as a signal to trigger other actions.

Some examples of hotwords include “Alexa” on Amazon Echo, “OK Google” on some Android devices, and “Hey Siri” on iPhones. These hotwords are used to initiate a full-fledged speech interaction interface. However, hotwords can be used in other ways too, such as performing simple command-and-control actions.

As a hacky solution, one can run a full ASR (Automatic Speech Recognition) engine to perform hotword detection. In this scenario, the device would watch for specific trigger words in the ASR transcriptions. However, ASR consumes a lot of device and bandwidth resources, and it does not protect your privacy when a cloud-based solution is used. Luckily, Snowboy was created to solve these problems!

Snowboy is:

highly customizable, allowing you to freely define your own magic hotword such as (but not limited to) “open sesame”, “garage door open”, or “hello dreamhouse”. If you can think it, you can hotword it!

always listening, but protective of your privacy, because Snowboy does not connect to the Internet or stream your voice anywhere.

light-weight and embedded, allowing you to run it on Raspberry Pis while consuming less than 10% CPU on the smallest Pi (single-core 700 MHz ARMv6).

Apache licensed!

Currently, Snowboy supports:

all versions of Raspberry Pi (with Raspbian based on Debian Jessie 8.0)

64bit Mac OS X

64bit Ubuntu (12.04 and 14.04)

iOS

Android with ARMv7 CPUs

Pine 64 with Debian Jessie 8.5 (3.10.102)

Intel Edison with Ubilinux (Debian Wheezy 7.8)

Tip

For iOS/Android, please check out Snowboy’s GitHub page.

Note

If you get it to work with more devices, OS, or programming languages, feel free to send a pull request to the GitHub repository.

Downloads

You can download pre-packaged Snowboy binaries and their Python wrappers for:

64 bit Ubuntu 12.04 / 14.04

MacOS X

Raspberry Pi with Raspbian 8.0, all versions (1/2/3/Zero)

Pine 64 with Debian Jessie 8.5 (3.10.102) (Pine64)

Intel Edison with Ubilinux (Debian Wheezy 7.8) (Edison)

Or you can check out GitHub to compile a version yourself.

Note

RPi3 has an ARMv8 CPU but Raspbian recognizes it as ARMv7: The Cortex-A53 can run ARMv7 code just fine (in fact, substantially better than Pi 2 due to architectural improvements). This is where we are staying in terms of supported userspace and kernel for the time being. If you want to roll your own OS/kernel, you can add arm_control=0x200 to /boot/config.txt to boot the cores in ARMv8 state.

Quick Start

To use Snowboy, you’ll need:

A supported device with a microphone (or a microphone input)

The corresponding decoder (downloaded above)

A trained model(s) from https://snowboy.kitt.ai.

Access Microphone

We use PortAudio for cross-platform audio input/output. We also use sox as a quick utility to check whether the microphone is set up correctly.

Install Sox.

On Linux systems, run:

sudo apt-get install python-pyaudio python3-pyaudio sox

On Mac, run:

brew install portaudio sox

Note

If you don’t have Homebrew, install it here

Install PortAudio’s Python bindings:

pip install pyaudio

Note

If you don’t have pip, you can install it here

Tip

If you have a Permission Error from pip, you can either use sudo pip install pyaudio or change the folder owner to yourself: sudo chown $USER -R /usr/local

To check whether you can record via your microphone, open a terminal and run:

rec temp.wav

Tip

If you see an error on a Raspberry Pi, please refer to the Running on Raspberry Pi section.

Decoder Structures

The decoder tarball contains the following files:

├── README.md
├── _snowboydetect.so
├── demo.py
├── demo2.py
├── light.py
├── requirements.txt
├── resources
│   ├── ding.wav
│   ├── dong.wav
│   ├── common.res
│   └── snowboy.umdl
├── snowboydecoder.py
├── snowboydetect.py
└── version

_snowboydetect.so is a dynamically linked library compiled with SWIG. It depends on your system's Python 2 library. All Snowboy-related libraries are statically linked into this file.

snowboydetect.py is a Python wrapper file generated by SWIG. Because it is not very easy to read, we created a higher-level wrapper: snowboydecoder.py.

You should already have a trained model file from https://snowboy.kitt.ai (for example snowboy.pmdl), or you can simply use the universal model in resources/snowboy.umdl.

Running a Demo

Tip

This demo runs on any supported device, but we suggest running it on a laptop/desktop with speaker output because the demo plays a ding sound when your hotword is triggered.

To access the simple demo in __main__ code of snowboydecoder.py, run the following command in your Terminal:

python demo.py snowboy.pmdl

Here snowboy.pmdl is your trained model downloaded from https://snowboy.kitt.ai.

Note

The .pmdl suffix indicates a personal model and a .umdl suffix indicates a universal model.

When prompted, speak into your microphone to see whether Snowboy detects your magic phrase.

The demo is fairly straightforward. The following is the demo's code:

import snowboydecoder
import sys
import signal

interrupted = False


def signal_handler(signal, frame):
    global interrupted
    interrupted = True


def interrupt_callback():
    global interrupted
    return interrupted

if len(sys.argv) == 1:
    print("Error: need to specify model name")
    print("Usage: python demo.py your.model")
    sys.exit(-1)

model = sys.argv[1]

signal.signal(signal.SIGINT, signal_handler)

detector = snowboydecoder.HotwordDetector(model, sensitivity=0.5)
print('Listening... Press Ctrl+C to exit')

detector.start(detected_callback=snowboydecoder.ding_callback,
               interrupt_check=interrupt_callback,
               sleep_time=0.03)

detector.terminate()

The main program loops in detector.start(). Every sleep_time=0.03 seconds, the function:

checks a ring buffer filled with microphone data to see whether a hotword has been detected; if so, it calls the detected_callback function.

calls the interrupt_check function: if it returns True, the main loop breaks and returns.

Here, we assigned detected_callback the default snowboydecoder.ding_callback, so every time your hotword is heard the computer plays a ding sound.

Warning

Do not append () to your callback function: the correct way is to assign detected_callback=your_func instead of detected_callback=your_func(). However, what if you have parameters to assign in your callback functions? Use a lambda function! So your callback would look like: callback=lambda: callback_function(parameters).
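For example, here is a minimal sketch (the greet function, its argument, and the model file name are hypothetical) showing how to pass parameters through a lambda:

import snowboydecoder


def greet(name):
    # Hypothetical parameterized callback.
    print("Hotword heard, hello %s!" % name)

detector = snowboydecoder.HotwordDetector("snowboy.pmdl", sensitivity=0.5)

# Wrong: detected_callback=greet("world") would call greet() once, immediately,
# and hand its return value (None) to the detector.
# Right: wrap the call in a lambda so it runs on every detection.
detector.start(detected_callback=lambda: greet("world"),
               interrupt_check=lambda: False,  # run until the process is killed
               sleep_time=0.03)
detector.terminate()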

Running on Raspberry Pi

Raspberry Pis are excellent hardware for running Snowboy. We support all versions of the Raspberry Pi (1, 2, 3, and Zero). The supported OS is Raspbian 8.0.

Set up Audio

Warning

You'll need a USB microphone for audio input. The on-board 3.5mm audio jack only provides audio out, not audio in, so a microphone with a 3.5mm plug will not work.

Tip

We have successfully used both generic USB microphones and the PlayStation 3 Eye webcam. You can buy a PS3 Eye for $5 on Amazon. Linux has built-in kernel modules for it, but Windows PCs do not have free drivers for the Eye.

Before beginning, please follow the Access Microphone section to install PortAudio, then test whether your microphone can be accessed with rec:

rec t.wav

Warning

Even though USB webcams should be “plug-and-play”, we have found that for some of them you have to reboot the Pi after plugging in the webcam.

If you see errors, check whether your ALSA/PulseAudio setup is configured properly. First, list the playback devices:

$ aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: ALSA [bcm2835 ALSA], device 0: bcm2835 ALSA [bcm2835 ALSA]
  Subdevices: 8/8
  Subdevice #0: subdevice #0
  Subdevice #1: subdevice #1
  Subdevice #2: subdevice #2
  Subdevice #3: subdevice #3
  Subdevice #4: subdevice #4
  Subdevice #5: subdevice #5
  Subdevice #6: subdevice #6
  Subdevice #7: subdevice #7
card 0: ALSA [bcm2835 ALSA], device 1: bcm2835 ALSA [bcm2835 IEC958/HDMI]
  Subdevices: 1/1
  Subdevice #0: subdevice #0

Here the playback device is card 0, device 0, or hw:0,0 (hw:0,1 is HDMI audio out).

List your recording device:

$ arecord -l
**** List of CAPTURE Hardware Devices ****
card 1: Camera [Vimicro USB2.0 UVC Camera], device 0: USB Audio [USB Audio]
  Subdevices: 1/1
  Subdevice #0: subdevice #0

Here the recording device is card 1, device 0, or hw:1,0.

Change your ~/.asoundrc file to:

pcm.!default {
  type asym
  playback.pcm {
    type plug
    slave.pcm "hw:0,0"
  }
  capture.pcm {
    type plug
    slave.pcm "hw:1,0"
  }
}

Try rec temp.wav again. Your microphone input should now be set up properly.

Go back to Running a Demo and run the demo.

If the demo runs successfully, try the Blink an LED light and Toggle an AC-powered Lamp sections with your Pi.

Warning

If you see the following error:

ImportError: /usr/lib/arm-linux-gnueabihf/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by rpi-arm-raspbian-8.0-1.0.1/_snowboy.so)

It means that your g++ library is not up to date. You are probably still using Debian Wheezy 7.5 (check with lsb_release -a). However, we compiled the Snowboy library under Raspbian based on Debian Jessie 8.0, which comes with g++-4.9. You can either upgrade your Raspbian version to Jessie, follow this post to install g++-4.9 on your Wheezy, or compile a version yourself from GitHub.

Tip

If you cannot hear any audio from the 3.5mm audio jack, the audio may be streamed to the HDMI port. Follow this config to change the audio output to the 3.5mm audio jack.

Blink an LED light

Wire an LED

Wiring an LED onto Pi’s GPIO ports is very easy. However, note that the LED has a shorter leg and a longer leg. The shorter leg is usually connected to ground (GND). The following demonstrates how to wire your LED to your Pi’s GPIO:

A few hundred Ohms would be enough for the resistor.

Control an LED with Python

We use the RPi.GPIO Python module to control an LED:

import RPi.GPIO as GPIO
import time


class Light(object):
    def __init__(self, port):
        self.port = port
        GPIO.setmode(GPIO.BCM)
        GPIO.setup(self.port, GPIO.OUT)
        self.on_state = GPIO.HIGH
        self.off_state = not self.on_state

    def set_on(self):
        GPIO.output(self.port, self.on_state)

    def set_off(self):
        GPIO.output(self.port, self.off_state)

    def is_on(self):
        return GPIO.input(self.port) == self.on_state

    def is_off(self):
        return GPIO.input(self.port) == self.off_state

    def toggle(self):
        if self.is_on():
            self.set_off()
        else:
            self.set_on()

    def blink(self, t=0.3):
        self.set_off()
        self.set_on()
        time.sleep(t)
        self.set_off()


if __name__ == "__main__":
    light = Light(17)
    while True:
        light.blink()
        time.sleep(0.7)

Save the file as light.py, then run:

sudo python light.py

The LED light will blink approximately once per second.

Blink an LED with Snowboy

Replace Snowboy’s callback function with LED’s blink() function:

import snowboydecoder
import sys
import signal
from light import Light

interrupted = False


def signal_handler(signal, frame):
    global interrupted
    interrupted = True


def interrupt_callback():
    global interrupted
    return interrupted

if len(sys.argv) == 1:
    print("Error: need to specify model name")
    print("Usage: python demo.py your.model")
    sys.exit(-1)

model = sys.argv[1]

signal.signal(signal.SIGINT, signal_handler)

detector = snowboydecoder.HotwordDetector(model, sensitivity=0.5)
print('Listening... Press Ctrl+C to exit')

led = Light(17)

detector.start(detected_callback=led.blink,
               interrupt_check=interrupt_callback,
               sleep_time=0.03)

detector.terminate()

The only place that changes is:

led = Light(17)

detector.start(detected_callback=led.blink,
               interrupt_check=interrupt_callback,
               sleep_time=0.03)

which will blink the LED connected to GPIO pin 17 when your hotword is detected:

sudo python demo.py your.pmdl

Toggle an AC-powered Lamp

Controlling an LED light is pretty simple so let’s go bigger and control some real home appliances!

In this example, we will use Raspberry Pi’s GPIO output to connect and break a higher voltage AC circuit.

This can be done with the help of a bipolar transistor. Luckily, one has already been built thanks to a successful Kickstarter campaign. You can purchase the IoT Relay on Amazon for $15 (as of April 2016).

The mechanism of the IoT Relay is very simple:

When the red wire carries a high DC voltage (say, 3.3V or 12V), the top two “normally ON” outlets turn off and the bottom two “normally OFF” outlets turn on.

When the red wire carries no DC voltage, the top two “normally ON” outlets turn on and the bottom two “normally OFF” outlets turn off.

Note

The top two and bottom two outlets can only be controlled in two groups. There is no way to control each of them individually.

To connect the IoT Relay to your Raspberry Pi, connect the red wire of the IoT Relay to GPIO pin 17 of the Pi. You can then simply reuse light.py or demo.py above to control any home appliance that is plugged into the IoT Relay!
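As a minimal sketch (assuming the Light class from light.py above, the relay's control wire on GPIO pin 17, and a hypothetical model file your.pmdl), you could toggle the relay on each detection instead of blinking an LED:

import snowboydecoder
from light import Light

# The IoT Relay's red control wire is wired to GPIO pin 17 (BCM numbering).
relay = Light(17)

detector = snowboydecoder.HotwordDetector("your.pmdl", sensitivity=0.5)
print('Listening... say the hotword to toggle the lamp')

# toggle() flips the GPIO state, switching the "normally OFF" outlets on or off.
detector.start(detected_callback=relay.toggle,
               interrupt_check=lambda: False,  # run until the process is killed
               sleep_time=0.03)
detector.terminate()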

The following video demonstrates Snowboy on a Raspberry Pi controlling three small LED lights on the right and a lamp on the left through the IoT Relay.

RESTful API Calls

Snowboy provides the following HTTP endpoints for you to train a model without using the website:

/api/v1/train: train a model with 3 .wav files

Uploaded .wav files will not be visible in the Snowboy library, so your privacy is protected. However, we do not provide an API to retrieve these files either.

/api/v1/train

The /api/v1/train endpoint provides an opportunity to:

programmatically train a model without using the web interface

achieve better acoustic consistency

Note

Since the training and test voice samples are collected from the same microphone, there will be no distortion resulting from the use of different microphones.

You can define a truly customized hotword for each of your end customers. Just ask them to say the hotword 3 times and a model will be trained on the fly!

Endpoint: https://snowboy.kitt.ai/api/v1/train/

Type: POST

Return: a binary personal model (.pmdl), or error

Parameter     | Required | Value
voice_samples | Y        | A list of 3 voice samples in .wav format.
token         | Y        | Secret user token
name          | Y        | String, or "unknown" if we don't know the hotword name
language      | N        | ar (Arabic), zh (Chinese), nl (Dutch), en (English), fr (French), dt (German), hi (Hindi), it (Italian), jp (Japanese), ko (Korean), fa (Persian), pl (Polish), pt (Portuguese), ru (Russian), es (Spanish), ot (Other)
age_group     | N        | 0_9, 10_19, 20_29, 30_39, 40_49, 50_59, 60+
gender        | N        | F/M
microphone    | N        | String, your microphone type

Note

The API token can be obtained by logging into https://snowboy.kitt.ai and clicking on “Profile settings”.

The following is a sample call script using Python. Save the file as training_service.py:

import sys
import base64
import requests


def get_wave(fname):
    with open(fname) as infile:
        return base64.b64encode(infile.read())


endpoint = "https://snowboy.kitt.ai/api/v1/train/"

############# MODIFY THE FOLLOWING #############
token = ""
hotword_name = "???"
language = "en"
age_group = "20_29"
gender = "M"
microphone = "macbook microphone"
############### END OF MODIFY ##################

if __name__ == "__main__":
    try:
        [_, wav1, wav2, wav3, out] = sys.argv
    except ValueError:
        print "Usage: %s wave_file1 wave_file2 wave_file3 out_model_name" % sys.argv[0]
        sys.exit()

    data = {
        "name": hotword_name,
        "language": language,
        "age_group": age_group,
        "gender": gender,
        "microphone": microphone,
        "token": token,
        "voice_samples": [
            {"wave": get_wave(wav1)},
            {"wave": get_wave(wav2)},
            {"wave": get_wave(wav3)}
        ]
    }

    response = requests.post(endpoint, json=data)
    if response.ok:
        with open(out, "w") as outfile:
            outfile.write(response.content)
        print "Saved model to '%s'." % out
    else:
        print "Request failed."
        print response.text

To execute, run the following command:

python training_service.py 1.wav 2.wav 3.wav saved_model.pmdl

Note

You can use the rec command to record a .wav file in the terminal:

rec -r 16000 -c 1 -b 16 -e signed-integer 1.wav

The following is a sample call script in bash with curl:

#! /usr/bin/env bash

ENDPOINT="https://snowboy.kitt.ai/api/v1/train/"

############# MODIFY THE FOLLOWING #############
TOKEN="??"
NAME="??"
LANGUAGE="en"
AGE_GROUP="20_29"
GENDER="M"
MICROPHONE="PS3 Eye"
############### END OF MODIFY ##################

if [[ "$#" != 4 ]]; then
    printf "Usage: %s wave_file1 wave_file2 wave_file3 out_model_name" $0
    exit
fi

WAV1=`base64 $1`
WAV2=`base64 $2`
WAV3=`base64 $3`
OUTFILE="$4"

cat <<EOF > data.json
{
    "name": "$NAME",
    "language": "$LANGUAGE",
    "age_group": "$AGE_GROUP",
    "token": "$TOKEN",
    "gender": "$GENDER",
    "microphone": "$MICROPHONE",
    "voice_samples": [
        {"wave": "$WAV1"},
        {"wave": "$WAV2"},
        {"wave": "$WAV3"}
    ]
}
EOF

curl -H "Content-Type: application/json" -X POST -d @data.json $ENDPOINT > $OUTFILE

Quota and Rate Limit

Each user gets 1,000 free API calls for each endpoint every 30 days, with a rate limit of 1 call per second.

If you'd like to purchase more, please send us an email at snowboy@kitt.ai.
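If you train several models in one batch, a simple way to stay within the 1-call-per-second limit is to sleep between requests. A minimal sketch, reusing the training_service.py script shown above (the file names are hypothetical):

import subprocess
import time

# Hypothetical batch: each entry is (three .wav recordings, output model path).
jobs = [
    (["alice1.wav", "alice2.wav", "alice3.wav"], "alice.pmdl"),
    (["bob1.wav", "bob2.wav", "bob3.wav"], "bob.pmdl"),
]

for wavs, out in jobs:
    # One call to the /api/v1/train endpoint per job, via the script above.
    subprocess.check_call(["python", "training_service.py"] + wavs + [out])
    time.sleep(1)  # stay under the 1-call-per-second rate limit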

Advanced Usage

Multiple Models and Callbacks

So far we have worked with only one model, which can only dictate a binary state. Wouldn't it be nice to listen for multiple models at the same time?

demo2.py demonstrates how to listen to multiple models at the same time:

import snowboydecoder
import sys
import signal

interrupted = False


def signal_handler(signal, frame):
    global interrupted
    interrupted = True


def interrupt_callback():
    global interrupted
    return interrupted

if len(sys.argv) != 3:
    print("Error: need to specify 2 model names")
    print("Usage: python demo.py 1st.model 2nd.model")
    sys.exit(-1)

models = sys.argv[1:]

# capture SIGINT signal, e.g., Ctrl+C
signal.signal(signal.SIGINT, signal_handler)

sensitivity = [0.5] * len(models)
detector = snowboydecoder.HotwordDetector(models, sensitivity=sensitivity)
callbacks = [lambda: snowboydecoder.play_audio_file(snowboydecoder.DETECT_DING),
             lambda: snowboydecoder.play_audio_file(snowboydecoder.DETECT_DONG)]
print('Listening... Press Ctrl+C to exit')

# main loop
# make sure you have the same numbers of callbacks and models
detector.start(detected_callback=callbacks,
               interrupt_check=interrupt_callback,
               sleep_time=0.03)

detector.terminate()

In this example, we used two models for the decoder and provided two callback functions. If the first hotword is detected, it’ll play a Ding sound. If the second hotword is detected, it’ll play a Dong sound.

Note

You are not limited to two models, nor to using only personal or only universal models. You can give HotwordDetector a mixture of personal and universal models, so long as your CPU is powerful enough to process them all.
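For example, a minimal sketch (the personal model file names and sensitivity values are hypothetical) mixing the bundled universal model with two personal models:

import snowboydecoder

# Hypothetical mix of the bundled universal model and two personal models.
models = ["resources/snowboy.umdl", "open_sesame.pmdl", "garage_door.pmdl"]
sensitivity = [0.5, 0.45, 0.55]  # one sensitivity value per model


def on_universal():
    print("universal hotword detected")


def on_open_sesame():
    print("first personal hotword detected")


def on_garage_door():
    print("second personal hotword detected")

detector = snowboydecoder.HotwordDetector(models, sensitivity=sensitivity)

# One callback per model, in the same order as the models list.
detector.start(detected_callback=[on_universal, on_open_sesame, on_garage_door],
               interrupt_check=lambda: False,  # run until the process is killed
               sleep_time=0.03)
detector.terminate()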

FAQ

What’s the CPU/RAM usage?

Snowboy uses minimal CPU on modern computers. On Raspberry Pis with decade-old CPU chips, it takes less than 5%–10% of CPU. In terms of memory, the PortAudio Python wrapper usually uses about 10MB of RAM, while the standalone C binary uses less than 2MB.

Name     | CPU                      | CPU Usage | RAM Usage
RPi 1    | single-core 700MHz ARMv6 | <10%      | Python: < 15MB, C: < 2MB
RPi 2    | quad-core 900MHz ARMv7   | <5%       |
RPi 3    | quad-core 1.2GHz ARMv8   | <5%       |
RPi Zero | single-core 1GHz ARMv6   | <5%       |
Macbooks | Intel Core i3/5/7        | <1%       |

What is detection sensitivity?

Detection sensitivity controls how sensitive the detection is. It is a value between 0 and 1. Increasing the sensitivity value leads to a better detection rate, but also a higher false-alarm rate. It is an important parameter that you should tune in your actual application.
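For instance, a minimal sketch (the model name and sensitivity value are hypothetical) that raises the sensitivity for a hotword that is frequently missed:

import snowboydecoder

# Raise sensitivity toward 1.0 if the hotword is often missed;
# lower it toward 0.0 if there are too many false alarms.
detector = snowboydecoder.HotwordDetector("snowboy.pmdl", sensitivity=0.7)

detector.start(detected_callback=snowboydecoder.ding_callback,
               interrupt_check=lambda: False,  # run until the process is killed
               sleep_time=0.03)
detector.terminate()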

What Audio format does Snowboy support?

Snowboy supports WAVE files with linear PCM (8-bit unsigned integer, 16-bit signed integer, or 32-bit signed integer samples). See SampleRate(), NumChannels(), and BitsPerSample() for the required sampling rate, number of channels, and bits per sample.

To convert your .wav file to Snowboy supported format, you can use sox:

sox -t wav YOUR_ORIGINAL.wav -t wav -r 16000 -b 16 -e signed-integer -c 1 YOUR_PROCESSED.wav

My pmdl model works well for me, but does not work well for others

Models with the .pmdl suffix are personal models; they are only expected to work well for the person who provided the audio samples. If you are looking for a model that works well for everyone, use a universal model (with the .umdl suffix).

My trained model works well on laptops but not on Pis

This is due to the acoustic distortion that results from different microphones. If you record your voice with two different microphones (one on your laptop and the other on your Pi) and then play the recordings back (play t.wav), you will hear that they sound very different (even though it is the same voice)!

The best solution is to use recordings from the same microphone both to train your model and to test your voice. If you want to use Snowboy on a Raspberry Pi, first record your voice there with rec t.wav (make sure to apt-get install sox) and then upload the 3 recordings to the Snowboy website using the upload button.

Once the training has completed, you can download the trained model.

Alternatively, you can also use the RESTful API Calls to do this directly without using the web interface.

Tip

Another trick is to play with the audio gain (see the answer regarding audio_gain below). We have noted that USB microphones on a Raspberry Pi usually have low volume, so increasing the audio gain may help.

The volume of my recording is too low/high

When you construct a HotwordDetector from snowboydecoder, there is an audio_gain parameter:

HotwordDetector(decoder_model, resource=RESOURCE_FILE, sensitivity=[], audio_gain=1)

Set audio_gain to a value larger than 1 if your test recording's volume is too low, or smaller than 1 if it is too high.
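For example, a minimal sketch (the model name and gain value are hypothetical) that boosts a quiet USB microphone:

import snowboydecoder

# audio_gain > 1 amplifies the incoming audio; audio_gain < 1 attenuates it.
detector = snowboydecoder.HotwordDetector("snowboy.pmdl",
                                          sensitivity=0.5,
                                          audio_gain=2)

detector.start(detected_callback=snowboydecoder.ding_callback,
               interrupt_check=lambda: False,  # run until the process is killed
               sleep_time=0.03)
detector.terminate()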

Does Snowboy come with VAD?

Yes, it does! VAD (Voice Activity Detection) detects whether there is a human voice in the audio. It needs far fewer resources than hotword detection, so Snowboy uses VAD as a filtering layer before hotword detection to reduce CPU usage.

How to use Snowboy’s VAD to detect voice and silence?

The return value of the SnowboyDetect.RunDetection() function indicates silence, voice, error, or a triggered hotword:

Return | Meaning
-2     | silence
-1     | error
0      | voice
1, ... | triggered index

Check out snowboydecoder.py for example usage.
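As a minimal sketch of interpreting these return values with the low-level wrapper (the constructor arguments follow the usage in snowboydecoder.py; the model name and the audio source are hypothetical):

import snowboydetect

# Resource and model paths as laid out in the decoder tarball above.
detector = snowboydetect.SnowboyDetect(resource_filename="resources/common.res",
                                       model_str="snowboy.pmdl")


def handle_frame(raw_audio):
    # raw_audio: a chunk of 16 kHz, 16-bit signed, mono PCM samples
    # pulled from your microphone (e.g., via PyAudio).
    status = detector.RunDetection(raw_audio)
    if status == -2:
        pass                                    # silence
    elif status == -1:
        print("detector error")
    elif status == 0:
        print("voice detected, but no hotword")
    else:
        print("hotword #%d triggered" % status)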

Who wrote Snowboy?

The KITT.AI co-founders. The core modules of Snowboy were created by Guoguo Chen, who is also a contributor to the open-source speech recognition toolkit Kaldi and to the Microsoft Cognitive Toolkit (CNTK).
