Deploying a highly available Kubernetes 1.14 master with kubeadm

1. Configure the k8s yum repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install on the master nodes: yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
Install on the worker (slave) nodes: yum install -y kubelet kubeadm --disableexcludes=kubernetes

Package kubelet-1.14.1-0.x86_64
Package kubeadm-1.14.1-0.x86_64
Package kubectl-1.14.1-0.x86_64

2. Install Docker

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum  makecache 
yum list docker-ce --showduplicates|sort -r
yum install docker-ce-<version>   (e.g. yum install -y docker-ce-18.06.3.ce-3.el7)

3. Initial configuration

3.1 Disable SELinux

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

3.2 Enable IP forwarding

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl -p /etc/sysctl.d/k8s.conf
If sysctl complains that the bridge-nf files cannot be found, run modprobe br_netfilter first and then re-run the command above.
Run sysctl --system to confirm that k8s.conf has been applied.
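Note that modprobe br_netfilter does not persist across reboots. One way to load the module automatically at boot, using the standard systemd modules-load.d mechanism (a small addition beyond the original steps):

echo br_netfilter > /etc/modules-load.d/br_netfilter.conf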

3.3 Disable the firewall

systemctl stop firewalld && systemctl disable firewalld

3.4 Disable swap

swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab

3.5 Start Docker and enable it at boot

Set Docker's cgroup driver to systemd, so that Docker and the kubelet use the same driver:
vim /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"]
}

systemctl start docker && systemctl enable docker

3.6 Modify the kubelet configuration to skip the swap check

echo "KUBELET_EXTRA_ARGS="--fail-swap-on=false"" > /etc/sysconfig/kubelet

Enable kubelet at boot and start it:
systemctl enable kubelet && systemctl start kubelet

3.7 Pull the images

List the images kubeadm needs: kubeadm config images list
Because k8s.gcr.io is unreachable without a proxy, we either pull the images in advance and retag them, or point kubeadm at the Aliyun mirror registry in its configuration file (see the sketch after the retag commands below).
k8s.gcr.io/kube-apiserver:v1.14.1
k8s.gcr.io/kube-controller-manager:v1.14.1
k8s.gcr.io/kube-scheduler:v1.14.1
k8s.gcr.io/kube-proxy:v1.14.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
The Docker images below can be pulled directly and then retagged, which is a bit tedious; the commands are as follows:
docker pull gcrxio/kube-apiserver:v1.14.1
docker pull gcrxio/kube-controller-manager:v1.14.1
docker pull gcrxio/kube-scheduler:v1.14.1
docker pull gcrxio/pause:3.1
docker pull gcrxio/etcd:3.3.10
docker pull gcrxio/kube-proxy:v1.14.1
docker pull coredns/coredns:1.3.1
Retag the images:
docker tag gcrxio/kube-proxy:v1.14.1 k8s.gcr.io/kube-proxy:v1.14.1
docker tag gcrxio/kube-apiserver:v1.14.1 k8s.gcr.io/kube-apiserver:v1.14.1
docker tag gcrxio/kube-controller-manager:v1.14.1 k8s.gcr.io/kube-controller-manager:v1.14.1
docker tag gcrxio/kube-scheduler:v1.14.1 k8s.gcr.io/kube-scheduler:v1.14.1
docker tag gcrxio/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag gcrxio/pause:3.1 k8s.gcr.io/pause:3.1
docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
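As an alternative to pulling and retagging by hand, kubeadm's ClusterConfiguration supports an imageRepository field; a minimal sketch, assuming the Aliyun mirror registry.aliyuncs.com/google_containers carries the needed images:

apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.1
imageRepository: registry.aliyuncs.com/google_containers

With this in kubeadm-config.yaml, kubeadm config images pull --config kubeadm-config.yaml pre-pulls everything from the mirror.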

Here we initialize the cluster with a kubeadm-config.yaml configuration file, but the API load balancer must be built first.

4. Build the API load balancer (keepalived + HAProxy)

Install keepalived and HAProxy with yum: yum install -y keepalived haproxy

Edit the keepalived configuration: vim /etc/keepalived/keepalived.conf

On master1:

global_defs {
   router_id lb01
}
vrrp_script check_haproxy {
    script "/root/check_haproxy.sh"
    interval 2
    weight 2
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.100.254/24 dev ens33 label ens33:vip
    }

    track_script {
     check_haproxy
    }
}

vim /root/check_haproxy.sh (make it executable afterwards: chmod +x /root/check_haproxy.sh)

#!/bin/bash
# Invoked by keepalived every 2s via vrrp_script, so it must check once and
# exit rather than loop forever. If haproxy is no longer listening, stop
# keepalived so that the VIP fails over to a backup node.
if [ `netstat -lntup | grep haproxy | wc -l` -eq 0 ]; then
    systemctl stop keepalived.service
fi

On master2 and master3, change state to BACKUP and set priority to 100 and 50 respectively.
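For example, on master2 the instance section differs only in these two lines (master3 would use priority 50):

vrrp_instance VI_1 {
    state BACKUP
    ...
    priority 100
    ...
}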

Start keepalived on all masters:
systemctl start keepalived.service && systemctl enable keepalived.service
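At this point master1 should hold the VIP; you can verify with something like:

ip addr show ens33 | grep 192.168.100.254

The VIP (labeled ens33:vip) should appear only on the current MASTER node.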

Edit the HAProxy configuration, identical on all three machines: vim /etc/haproxy/haproxy.cfg

global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     20000
    user        haproxy
    group       haproxy
    daemon
    nbproc  2
    ulimit-n    65535  
    stats socket /var/lib/haproxy/stats

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    balance roundrobin
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 20000

frontend k8s-api
    bind 192.168.100.254:8443
    mode tcp
    option tcplog
    tcp-request inspect-delay 5s
    tcp-request content accept if { req.ssl_hello_type 1 }
    default_backend k8s-api

backend k8s-api
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server k8s-api-1 192.168.100.145:6443 check
    server k8s-api-2 192.168.100.146:6443 check
    server k8s-api-3 192.168.100.147:6443 check
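Before starting HAProxy, you can have it validate the configuration file itself:

haproxy -c -f /etc/haproxy/haproxy.cfg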

Configure the kernel so that HAProxy can forward traffic and bind the VIP even on the BACKUP nodes: vim /etc/sysctl.conf

net.ipv4.ip_forward = 1
net.ipv4.ip_nonlocal_bind = 1

Then run sysctl -p to apply the settings.
Start HAProxy: systemctl enable haproxy && systemctl start haproxy
With that, the load balancer is complete.

5. Set up multiple masters

First, the server clocks must agree. Run ntpdate 0.cn.pool.ntp.org to sync time, or use your own NTP server; consistent time across all servers is mandatory.

We build multiple masters with a stacked etcd topology (etcd runs as pods on the masters; if the third etcd member fails to come up, you can instead create the cluster with an external etcd, which I explain in section 7 after covering the pod-based approach). We need a configuration file, kubeadm-config.yaml, with the following content:

apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.1
controlPlaneEndpoint: "192.168.100.254:8443"
networking:
  podSubnet: 10.244.0.0/16
apiServer:
  certSANs:
  - 192.168.100.145
  - 192.168.100.146
  - 192.168.100.147
  - 192.168.100.254

controlPlaneEndpoint is the VIP address of the API; apiServer.certSANs lists the IP addresses of the three masters plus the VIP.

Initialize with the config file: kubeadm init --config kubeadm-config.yaml
When initialization finishes it prints a number of commands; keep them, as we will need them next.

# Files to create and copy
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Join command for additional control-plane (master) nodes
  kubeadm join 192.168.100.254:8443 --token 6dy980.sxyqhcgj1ll4ljgs \
    --discovery-token-ca-cert-hash sha256:c801532023dfcacdb4f7f9412c7f78f3cefd72f08825c50df1fb733e798cd6d5 \
    --experimental-control-plane 
# Join command for worker nodes
    kubeadm join 192.168.100.254:8443 --token 6dy980.sxyqhcgj1ll4ljgs \
    --discovery-token-ca-cert-hash sha256:c801532023dfcacdb4f7f9412c7f78f3cefd72f08825c50df1fb733e798cd6d5 
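Before going further, it is worth confirming that the certSANs from kubeadm-config.yaml actually landed in the API server's serving certificate; one way to check with plain openssl:

openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'

The output should list the three master IPs and the VIP.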

After the first master is built, we need to copy its certificates to the remaining two masters. On master2 and master3, create the directory: mkdir -p /etc/kubernetes/pki/etcd/ , then copy the files:

################# copy to master2 ###########################
  scp /etc/kubernetes/pki/ca.crt 192.168.100.146:/etc/kubernetes/pki/ca.crt
  scp /etc/kubernetes/pki/ca.key 192.168.100.146:/etc/kubernetes/pki/ca.key
  scp /etc/kubernetes/pki/sa.key 192.168.100.146:/etc/kubernetes/pki/sa.key
  scp /etc/kubernetes/pki/sa.pub 192.168.100.146:/etc/kubernetes/pki/sa.pub
  scp /etc/kubernetes/pki/front-proxy-ca.crt 192.168.100.146:/etc/kubernetes/pki/front-proxy-ca.crt
  scp /etc/kubernetes/pki/front-proxy-ca.key 192.168.100.146:/etc/kubernetes/pki/front-proxy-ca.key
  scp /etc/kubernetes/pki/etcd/ca.crt 192.168.100.146:/etc/kubernetes/pki/etcd/ca.crt
  scp /etc/kubernetes/pki/etcd/ca.key 192.168.100.146:/etc/kubernetes/pki/etcd/ca.key
  scp /etc/kubernetes/admin.conf 192.168.100.146:/etc/kubernetes/admin.conf
  scp /etc/kubernetes/admin.conf 192.168.100.146:~/.kube/config
################# copy to master3 ###########################
  scp /etc/kubernetes/pki/ca.crt 192.168.100.147:/etc/kubernetes/pki/ca.crt
  scp /etc/kubernetes/pki/ca.key 192.168.100.147:/etc/kubernetes/pki/ca.key
  scp /etc/kubernetes/pki/sa.key 192.168.100.147:/etc/kubernetes/pki/sa.key
  scp /etc/kubernetes/pki/sa.pub 192.168.100.147:/etc/kubernetes/pki/sa.pub
  scp /etc/kubernetes/pki/front-proxy-ca.crt 192.168.100.147:/etc/kubernetes/pki/front-proxy-ca.crt
  scp /etc/kubernetes/pki/front-proxy-ca.key 192.168.100.147:/etc/kubernetes/pki/front-proxy-ca.key
  scp /etc/kubernetes/pki/etcd/ca.crt 192.168.100.147:/etc/kubernetes/pki/etcd/ca.crt
  scp /etc/kubernetes/pki/etcd/ca.key 192.168.100.147:/etc/kubernetes/pki/etcd/ca.key
  scp /etc/kubernetes/admin.conf 192.168.100.147:/etc/kubernetes/admin.conf
  scp /etc/kubernetes/admin.conf 192.168.100.147:~/.kube/config

After copying, use the join command produced above to add master2 and master3 as control-plane nodes.
Run on master2 and master3:

  kubeadm join 192.168.100.254:8443 --token 6dy980.sxyqhcgj1ll4ljgs \
    --discovery-token-ca-cert-hash sha256:c801532023dfcacdb4f7f9412c7f78f3cefd72f08825c50df1fb733e798cd6d5 \
    --experimental-control-plane

Once the masters have joined, install the flannel network:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
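You can watch the flannel and control-plane pods come up with:

kubectl get pods -n kube-system -o wide

Wait until the flannel pods are Running on every master before joining workers.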

Join the worker nodes to the cluster:

    kubeadm join 192.168.100.254:8443 --token 6dy980.sxyqhcgj1ll4ljgs \
    --discovery-token-ca-cert-hash sha256:c801532023dfcacdb4f7f9412c7f78f3cefd72f08825c50df1fb733e798cd6d5
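Back on any master, confirm that every node registers and eventually reports Ready:

kubectl get nodes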

6. Add kubectl shell completion

echo "source <(kubectl completion bash)" >> ~/.bashrc

7. Create the k8s cluster with an external etcd cluster

First we build the etcd cluster outside Kubernetes.

7.1 Environment preparation

Create the etcd directories: mkdir -p /opt/etcd/{cfg,bin,ssl,log}
Download the etcd package etcd-v3.3.10-linux-amd64.tar.gz and unpack it:

tar -zxvf etcd-v3.3.10-linux-amd64.tar.gz
cd etcd-v3.3.10-linux-amd64/
cp etcd etcdctl /opt/etcd/bin/

Install the cfssl certificate tools:

mkdir  -p /root/k8s/etcd-cert
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
cd /root/k8s/etcd-cert

Create the script file: vim etcd-cert.sh

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------

cat > server-csr.json <<EOF
{
    "CN": "etcd",
    "hosts": [
    "192.168.100.145",
    "192.168.100.146",
    "192.168.100.147"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server 

Note: be careful with the whitespace at the very end of the file, or the generated output may be garbled.
Run bash etcd-cert.sh to create the certificates.

[root@localhost etcd-cert]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  etcd-cert.sh  server.csr  server-csr.json  server-key.pem  server.pem

Finally, move the certificates prepared above into /opt/etcd/ssl and copy them to all nodes. With that, the groundwork is done; next we install etcd.

[root@localhost etcd-cert]# cd  /root/k8s/etcd-cert
[root@localhost etcd-cert]# cp *.pem /opt/etcd/ssl/
cd /opt/etcd/ssl/  

Create the etcd service script: vim /root/script/etcd.sh

#!/bin/bash
# example: ./etcd.sh etcd01 192.168.1.10 etcd02=https://192.168.1.11:2380,etcd03=https://192.168.1.12:2380

ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3

WORK_DIR=/opt/etcd

cat <<EOF >$WORK_DIR/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=${WORK_DIR}/cfg/etcd
ExecStart=${WORK_DIR}/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=${WORK_DIR}/ssl/server.pem \
--key-file=${WORK_DIR}/ssl/server-key.pem \
--peer-cert-file=${WORK_DIR}/ssl/server.pem \
--peer-key-file=${WORK_DIR}/ssl/server-key.pem \
--trusted-ca-file=${WORK_DIR}/ssl/ca.pem \
--peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd

Generate the config file and systemd unit on etcd01 (note: the first member waits for its peers, so the start may appear to hang until the other members come up):

bash /root/script/etcd.sh etcd01 192.168.100.145 etcd02=https://192.168.100.146:2380,etcd03=https://192.168.100.147:2380

Copy all of the files generated above to the other etcd nodes:

scp -r /opt/etcd/ 192.168.100.146:/opt/
scp -r /opt/etcd/ 192.168.100.147:/opt/
scp -r /usr/lib/systemd/system/etcd.service 192.168.100.146:/usr/lib/systemd/system/
scp -r /usr/lib/systemd/system/etcd.service 192.168.100.147:/usr/lib/systemd/system/

On the other nodes, edit /opt/etcd/cfg/etcd to match that node. For example, on etcd02 (192.168.100.146):
vim /opt/etcd/cfg/etcd

#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.100.146:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.100.146:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.100.146:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.100.146:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.100.145:2380,etcd02=https://192.168.100.146:2380,etcd03=https://192.168.100.147:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Modify etcd03's configuration in the same way.
Then start etcd on every node:

systemctl daemon-reload
systemctl start etcd
systemctl enable etcd

Verify that the cluster is healthy:

etcdctl --ca-file=/opt/etcd/ssl/ca.pem  --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem  --endpoints="https://192.168.100.145:2379,https://192.168.100.146:2379,https://192.168.100.147:2379"  cluster-health
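The flags above use etcdctl's default v2 API in etcd 3.3. For reference, the equivalent health check through the v3 API (same certificate paths assumed):

ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.100.145:2379,https://192.168.100.146:2379,https://192.168.100.147:2379" endpoint health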

At this point the etcd cluster is up, and we can create the k8s cluster against it.
The kube-config.yaml configuration file is as follows:

apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.1
controlPlaneEndpoint: "192.168.100.254:8443"
networking:
  podSubnet: 10.244.0.0/16
apiServer:
  certSANs:
  - 192.168.100.145
  - 192.168.100.146
  - 192.168.100.147
  - 192.168.100.254
etcd:
  external:
    endpoints:
    - https://192.168.100.145:2379
    - https://192.168.100.146:2379
    - https://192.168.100.147:2379
    caFile: /opt/etcd/ssl/ca.pem
    certFile: /opt/etcd/ssl/server.pem
    keyFile: /opt/etcd/ssl/server-key.pem

This adds the etcd cluster endpoints and the certificate file locations so that the API server can access etcd. We then initialize the cluster with kubeadm init --config kube-config.yaml and follow the same remaining steps as above, which completes a k8s cluster installed with an external etcd.
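After initialization you can confirm that the control plane reaches the external etcd; on a master, kubectl get cs (componentstatuses) should list each etcd endpoint as Healthy:

kubectl get cs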

Problems encountered:

Inspecting the API server's log shows a recurring error; the log file is at /var/log/pods/kube-system_kube-apiserver-kube-masterxxxxxxxxxxxxxxxx/kube-apiserver/0.log

http: TLS handshake error from 192.168.100.145:41004: remote error: tls: bad certificate\n","stream":"stderr","time":"2019-04-28T03:15:25.700394664Z"}

I have never been sure what causes this; my guess is that HAProxy is not presenting a certificate, which produces the error. If anyone reading this knows the answer, please leave a comment below and help me out. Beginners who hit problems while learning are also welcome to leave comments, and I will answer when I have time.
