Setting up Kubernetes v1.10.0 with kubeadm (using mainland-China mirrors)

A guide to installing Kubernetes v1.10 with kubeadm.

Environment:

  • centos7

  • docker17.03

  • socat-1.7.3.2-2.el7.x86_64

  • kubelet-1.10.0-0.x86_64

  • kubernetes-cni-0.6.0-0.x86_64

  • kubectl-1.10.0-0.x86_64

  • kubeadm-1.10.0-0.x86_64

  • k8s-master 10.176.57.151, k8s-node1 10.176.57.152

I. Environment initialization (all nodes)

1. Set the hostname (master node shown as the example)

vim /etc/hostname

k8s-master
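Alternatively, hostnamectl applies the name immediately without editing the file (a convenience, not part of the original steps):

hostnamectl set-hostname k8s-master   # run the matching command on each node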

2. Configure host mappings

vim /etc/hosts

127.0.0.1  localhost localhost.localdomain localhost4 localhost4.localdomain4

::1        localhost localhost.localdomain localhost6 localhost6.localdomain6

10.176.57.151 k8s-master

10.176.57.152 k8s-node1

3. Set up passwordless SSH from the master

ssh-keygen  # accept the defaults at every prompt

ssh-copy-id  k8s-node1

4. On all nodes: disable the firewall, swap, and SELinux; set kernel parameters; add the Kubernetes yum repo; install dependencies; and configure NTP (a reboot is recommended once done)

systemctl stop firewalld

systemctl disable firewalld

swapoff -a

sed -i 's/.*swap.*/#&/' /etc/fstab

setenforce  0

sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux

sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/sysconfig/selinux

sed -i "s/^SELINUX=permissive/SELINUX=disabled/g" /etc/selinux/config 

modprobe br_netfilter

cat <<EOF >  /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

EOF

sysctl -p /etc/sysctl.d/k8s.conf

ls /proc/sys/net/bridge

cat <<EOF > /etc/yum.repos.d/kubernetes.repo

[kubernetes]

name=Kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

enabled=1

gpgcheck=1

repo_gpgcheck=1

gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

EOF

yum install -y epel-release

yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget vim  ntpdate libseccomp libtool-ltdl

systemctl enable ntpdate.service

echo '*/30 * * * * /usr/sbin/ntpdate time7.aliyun.com >/dev/null 2>&1' > /tmp/crontab2.tmp

crontab /tmp/crontab2.tmp

systemctl start ntpdate.service

echo "* soft nofile 65536" >> /etc/security/limits.conf

echo "* hard nofile 65536" >> /etc/security/limits.conf

echo "* soft nproc 65536"  >> /etc/security/limits.conf

echo "* hard nproc 65536"  >> /etc/security/limits.conf

echo "* soft  memlock  unlimited"  >> /etc/security/limits.conf

echo "* hard memlock  unlimited"  >> /etc/security/limits.conf

II. Install and configure keepalived (master node)

1. Install keepalived

yum install -y keepalived

systemctl enable keepalived

2. Configure keepalived.conf (a sample is in the keepalived directory)

vim /etc/keepalived/keepalived.conf

global_defs {

  router_id LVS_k8s

}

vrrp_script CheckK8sMaster {

    script "curl -k https://10.176.57.200:6443"

    interval 3

    timeout 9

    fall 2

    rise 2

}

vrrp_instance VI_1 {

    state MASTER

    interface em1  # change to your own NIC name

    virtual_router_id 61

    priority 100

    advert_int 1

    mcast_src_ip 10.176.57.151

    nopreempt

    authentication {

        auth_type PASS

        auth_pass sqP05dQgMSlzrxHj

    }

    virtual_ipaddress {

        10.176.57.200/24

    }

    track_script {

        CheckK8sMaster

    }

}

If there are multiple nodes, list the peers explicitly with unicast_peer:

global_defs {

  router_id LVS_k8s

}

vrrp_script CheckK8sMaster {

    script "curl -k https://10.176.57.200:6443"

    interval 3

    timeout 9

    fall 2

    rise 2

}

vrrp_instance VI_1 {

    state MASTER

    interface ens160

    virtual_router_id 61

    priority 100

    advert_int 1

    mcast_src_ip 10.176.57.151

    nopreempt

    authentication {

        auth_type PASS

        auth_pass sqP05dQgMSlzrxHj

    }

    unicast_peer {

        10.176.57.??

        10.176.57.??

    }

    virtual_ipaddress {

        10.176.57.200/24

    }

    track_script {

        CheckK8sMaster

    }

}

3. Start keepalived

systemctl restart keepalived

The VIP should now be bound to the master:

ip addr

em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000

    link/ether 78:2b:cb:24:13:9f brd ff:ff:ff:ff:ff:ff

    inet 10.176.57.151/25 brd 10.176.57.255 scope global noprefixroute em1

      valid_lft forever preferred_lft forever

    inet 10.176.57.200/24 scope global em1

      valid_lft forever preferred_lft forever

    inet6 fe80::d5d0:fb0f:12c7:b720/64 scope link noprefixroute

      valid_lft forever preferred_lft forever
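A quick failover sanity check, assuming the interface and VIP above: stop keepalived and confirm the VIP leaves em1, then start it again.

ip addr show em1 | grep 10.176.57.200   # VIP present
systemctl stop keepalived               # VIP should disappear from em1
systemctl start keepalived              # VIP returns after a few seconds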

III. Create the etcd certificates (master node)

1. Set up the cfssl environment

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64

wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

chmod +x cfssl_linux-amd64

mv cfssl_linux-amd64 /usr/local/bin/cfssl

chmod +x cfssljson_linux-amd64

mv cfssljson_linux-amd64 /usr/local/bin/cfssljson

chmod +x cfssl-certinfo_linux-amd64

mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

export PATH=/usr/local/bin:$PATH
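Confirm the binaries are on the PATH:

cfssl version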

2. Create the CA files

mkdir /root/ssl

cd /root/ssl

vim  ca-config.json

{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes-Soulmate": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}

vim ca-csr.json

{
  "CN": "kubernetes-Soulmate",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "shanghai",
      "L": "shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

vim etcd-csr.json

{

  "CN": "etcd",

  "hosts": [

    "127.0.0.1",

    "10.176.57.151"

  ],

  "key": {

    "algo": "rsa",

    "size": 2048

  },

  "names": [

    {

      "C": "CN",

      "ST": "shanghai",

      "L": "shanghai",

      "O": "k8s",

      "OU": "System"

    }

  ]

}

cfssl gencert -ca=ca.pem \

  -ca-key=ca-key.pem \

  -config=ca-config.json \

  -profile=kubernetes-Soulmate etcd-csr.json | cfssljson -bare etcd

mkdir -p /etc/etcd/ssl

cp etcd.pem etcd-key.pem ca.pem /etc/etcd/ssl/
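Optionally, inspect the issued certificate, e.g. to double-check the hosts list (using the cfssl-certinfo binary installed earlier):

cfssl-certinfo -cert /etc/etcd/ssl/etcd.pem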

IV. Install and configure etcd (master node)

1. Install etcd

yum install etcd -y

mkdir -p /var/lib/etcd

2. Edit etcd.service

[Unit]

Description=Etcd Server

After=network.target

After=network-online.target

Wants=network-online.target

Documentation=https://github.com/coreos

[Service]

Type=notify

WorkingDirectory=/var/lib/etcd/

ExecStart=/usr/bin/etcd \
  --name k8s-master.ncs.corp.int-ads \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --initial-advertise-peer-urls https://10.176.57.151:2380 \
  --listen-peer-urls https://10.176.57.151:2380 \
  --listen-client-urls https://10.176.57.151:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://10.176.57.151:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster k8s-master.ncs.corp.int-ads=https://10.176.57.151:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd

Restart=on-failure

RestartSec=5

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target

3. Enable it at boot

mv etcd.service /usr/lib/systemd/system/

systemctl daemon-reload

systemctl enable etcd

systemctl start etcd

systemctl status etcd

4. Check cluster health

etcdctl --endpoints=https://10.176.57.151:2379 \

  --ca-file=/etc/etcd/ssl/ca.pem \

  --cert-file=/etc/etcd/ssl/etcd.pem \

  --key-file=/etc/etcd/ssl/etcd-key.pem  cluster-health
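The member list can be checked with the same TLS flags (etcdctl v2 syntax, matching the health check above):

etcdctl --endpoints=https://10.176.57.151:2379 \
  --ca-file=/etc/etcd/ssl/ca.pem \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem  member list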

V. Install docker-ce (all nodes)

1. Install docker-ce (17.03.x was the newest Docker release kubeadm supported at the time)

yum install https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm  -y

yum install https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/Packages/docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm  -y

2. Edit the ExecStart line in docker.service (caution: exposing the Docker API on tcp://0.0.0.0:2375 without TLS is insecure outside a trusted lab network)

ExecStart=/usr/bin/dockerd  -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock  --registry-mirror=https://ms3cfraz.mirror.aliyuncs.com

3. Start Docker

systemctl daemon-reload

systemctl restart docker

systemctl enable docker

systemctl status docker
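A quick check that the daemon is up and the registry mirror was picked up:

docker version
docker info | grep -A1 'Registry Mirrors'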

VI. Install and configure kubeadm

1. Install kubelet, kubeadm, and kubectl on all nodes (local packages are used here)

https://pan.baidu.com/s/1XxaOKGdyazKSvSOeOXdkpw  (extraction code: txod)

tar -xvf kube-packages-1.10.1.tar

cd kube-packages-1.10.1

rpm -Uvh *.rpm    # or: yum localinstall -y *.rpm

2. Edit the kubelet config file on all nodes

vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs" #修改这一行

Environment="KUBELET_EXTRA_ARGS=--v=2 --fail-swap-on=false --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/k8sth/pause-amd64:3.0" #添加这一行

3. Reload the configuration (all nodes)

systemctl daemon-reload

systemctl enable kubelet

4. Command completion (all nodes)

yum install -y bash-completion

source /usr/share/bash-completion/bash_completion

source <(kubectl completion bash)

echo "source <(kubectl completion bash)" >> ~/.bashrc

VII. Initialize the cluster

1. Create the init configuration file (master node)

vim config.yaml

apiVersion: kubeadm.k8s.io/v1alpha1

kind: MasterConfiguration

etcd:

  endpoints:

  - https://10.176.57.151:2379

  caFile: /etc/etcd/ssl/ca.pem

  certFile: /etc/etcd/ssl/etcd.pem

  keyFile: /etc/etcd/ssl/etcd-key.pem

  dataDir: /var/lib/etcd

networking:

  podSubnet: 10.244.0.0/16

kubernetesVersion: 1.10.0

api:

  advertiseAddress: "10.176.57.200"

token: "b99a00.a144ef80536d4344"

tokenTTL: "0s"

apiServerCertSANs:

- k8s-master.ncs.corp.int-ads

- 10.176.57.151

- 10.176.57.152

- 10.176.57.200

featureGates:

  CoreDNS: true

imageRepository: "registry.cn-hangzhou.aliyuncs.com/k8sth"

2. Initialize the cluster

kubeadm init --config config.yaml

If initialization fails, reset with:

kubeadm reset

# or clean up manually:

rm -rf /etc/kubernetes/*.conf

rm -rf /etc/kubernetes/manifests/*.yaml

docker ps -aq | xargs -r docker rm -f

systemctl  stop kubelet

A successful initialization ends with output like this:


Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node

as root:

kubeadm join 10.176.57.200:6443 --token b99a00.a144ef80536d4344 --discovery-token-ca-cert-hash sha256:020aa06e81273734ba108042283b04d40b6fb2884a3496dc24a05a526912b030

3. Run the following on the master

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

4. Deploy the flannel network (master node)

The manifest is in the app directory.

kubectl create -f  kube-flannel.yml
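Wait for the flannel pods to reach Running on every node before joining workers; the stock manifest labels them app=flannel (an assumption about the local kube-flannel.yml):

kubectl get pods -n kube-system -l app=flannel -o wide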

5. Deploy the dashboard

kubectl create -f kubernetes-dashboard.yaml

Fetch the login token:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

Open the dashboard:

https://10.176.57.151:30000/#!/login

Paste the token to log in.
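The grep above assumes an admin-user service account exists. If your kubernetes-dashboard.yaml does not create one, a minimal sketch that grants it cluster-admin (broad rights; fine for a lab):

kubectl create serviceaccount admin-user -n kube-system
kubectl create clusterrolebinding admin-user --clusterrole=cluster-admin --serviceaccount=kube-system:admin-user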

6. Install heapster

[root@master ~]# tree kube-heapster/

kube-heapster/

├── influxdb

│  ├── grafana.yaml

│  ├── heapster.yaml

│  └── influxdb.yaml

└── rbac

    └── heapster-rbac.yaml

[root@master ~]# kubectl create -f kube-heapster/influxdb/

[root@master ~]#  kubectl create -f kube-heapster/rbac/

Revisit https://10.176.57.151:30000/#!/login to see the monitoring data.

7. Allow the master to run pods

kubectl taint nodes --all node-role.kubernetes.io/master-

VIII. Add nodes to the cluster

Run on k8s-node1:

kubeadm join 10.176.57.200:6443 --token b99a00.a144ef80536d4344 --discovery-token-ca-cert-hash sha256:020aa06e81273734ba108042283b04d40b6fb2884a3496dc24a05a526912b030
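The token never expires here because the config sets tokenTTL: "0s". With a default TTL you would mint a new token on the master and recompute the CA hash:

kubeadm token create
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'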

Check the nodes and pods:

[root@master ~]# kubectl get nodes

NAME                          STATUS    ROLES    AGE      VERSION

k8s-master.ncs.corp.int-ads  Ready    master    1d        v1.10.1

k8s-node1.ncs.corp.int-ads    Ready    <none>    18h      v1.10.1

[root@k8s-master init.record]# kubectl get pods --all-namespaces -o wide

NAMESPACE    NAME                                                  READY    STATUS    RESTARTS  AGE      IP              NODE

kube-system  coredns-7997f8864c-kgkhq                              1/1      Running  0          1d        10.244.0.27    k8s-master

kube-system  coredns-7997f8864c-ktbc4                              1/1      Running  0          1d        10.244.0.28    k8s-master

kube-system  heapster-647b89cd4b-5hgmk                            1/1      Running  0          1d        10.244.0.32    k8s-master

kube-system  kube-apiserver-k8s-master                            1/1      Running  0          1d        10.176.57.151  k8s-master

kube-system  kube-controller-manager-k8s-master                    1/1      Running  0          1d        10.176.57.151  k8s-master

kube-system  kube-flannel-ds-amd64-j77jq                          1/1      Running  0          1h        10.176.57.152  k8s-node1

kube-system  kube-flannel-ds-amd64-qwmkf                          1/1      Running  0          1d        10.176.57.151  k8s-master

kube-system  kube-proxy-775m6                                      1/1      Running  0          1h        10.176.57.152  k8s-node1

kube-system  kube-proxy-m2dqx                                      1/1      Running  0          1d        10.176.57.151  k8s-master

kube-system  kube-scheduler-k8s-master                             1/1      Running  0          1d        10.176.57.151  k8s-master

kube-system  kubernetes-dashboard-7b44ff9b77-dqc8h                1/1      Running  0          1d        10.244.0.29    k8s-master

kube-system  monitoring-grafana-74bdd98b7d-bm6fc                  1/1      Running  0          1d        10.244.0.30    k8s-master

kube-system  monitoring-influxdb-55bbd4b96-8x8cx                  1/1      Running  0          1d        10.244.0.31    k8s-master

Dashboard UI (screenshot omitted).

IX. Install EFK

The official reference architecture for cluster-level logging (figure omitted).

1. Label the nodes that should run fluentd

kubectl label node k8s-master.ncs.corp.int-ads beta.kubernetes.io/fluentd-ds-ready=true
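Confirm the label landed:

kubectl get nodes --show-labels | grep fluentd-ds-ready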

2. Deploy EFK

kubectl create -f efk/

3. Check that the EFK pods are running

[root@k8s-master kube-packages-1.10.1]# kubectl get pods --all-namespaces -o wide

NAMESPACE    NAME                                                  READY    STATUS    RESTARTS  AGE      IP              NODE

kube-system  coredns-7997f8864c-kgkhq                              1/1      Running  0          2d        10.244.0.27    k8s-master

kube-system  coredns-7997f8864c-ktbc4                              1/1      Running  0          2d        10.244.0.28    k8s-master

kube-system  elasticsearch-logging-0                              1/1      Running  0          2d        10.244.0.34    k8s-master

kube-system  elasticsearch-logging-1                              1/1      Running  0          2d        10.244.0.35    k8s-master

kube-system  fluentd-es-v2.0.4-l49nv                              1/1      Running  0          2d        10.244.0.33    k8s-master

kube-system  grafana-core-f796895df-dcwgt                          1/1      Running  0          1d        10.244.0.44    k8s-master

kube-system  heapster-647b89cd4b-5hgmk                            1/1      Running  0          2d        10.244.0.32    k8s-master

kube-system  kibana-logging-7445dc9757-4j769                      1/1      Running  0          2d        10.244.0.36    k8s-master

kube-system  kube-apiserver-k8s-master                            1/1      Running  0          2d        10.176.57.151  k8s-master

kube-system  kube-controller-manager-k8s-master                    1/1      Running  0          2d        10.176.57.151  k8s-master

kube-system  kube-flannel-ds-amd64-j77jq                          1/1      Running  0          22h      10.176.57.152  k8s-node1

kube-system  kube-flannel-ds-amd64-qwmkf                          1/1      Running  0          2d        10.176.57.151  k8s-master

kube-system  kube-proxy-775m6                                      1/1      Running  0          22h      10.176.57.152  k8s-node1

kube-system  kube-proxy-m2dqx                                      1/1      Running  0          2d        10.176.57.151  k8s-master

kube-system  kube-scheduler-k8s-master                             1/1      Running  0          2d        10.176.57.151  k8s-master

kube-system  kubernetes-dashboard-7b44ff9b77-dqc8h                1/1      Running  0          2d        10.244.0.29    k8s-master

kube-system  monitoring-grafana-74bdd98b7d-bm6fc                  1/1      Running  0          2d        10.244.0.30    k8s-master

kube-system  monitoring-influxdb-55bbd4b96-8x8cx                  1/1      Running  0          2d        10.244.0.31    k8s-master

4. Access Kibana

Access it through a kubectl proxy:

kubectl proxy --address='10.176.57.151' --port=8888 --accept-hosts='^*$'

Then open:

http://10.176.57.151:8888/api/v1/namespaces/kube-system/services/kibana-logging/proxy/

Kibana UI (screenshot omitted).

X. Deploy Prometheus

1. Pull the images

docker pull prom/node-exporter

docker pull prom/prometheus:v2.0.0

docker pull grafana/grafana:4.2.0

2. Deploy node-exporter as a DaemonSet

kubectl create -f node-exporter.yaml

3. Deploy the Prometheus components

kubectl create -f  rbac-setup.yaml

kubectl create -f  configmap.yaml

kubectl create -f  prometheus.deploy.yml

kubectl create -f  prometheus.svc.yml

http://10.176.57.151:31672/metrics shows the node-exporter metrics.

http://10.176.57.151:30003/targets confirms that Prometheus has connected to the Kubernetes apiserver.

The Prometheus web UI supports basic queries, e.g. per-pod CPU usage across the cluster:

sum by (pod_name)( rate(container_cpu_usage_seconds_total{image!="", pod_name!=""}[1m] ) )
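A similar query for per-pod memory usage (same cAdvisor label names of that era):

sum by (pod_name)( container_memory_usage_bytes{image!="", pod_name!=""} )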

Prometheus UI (screenshot omitted).

4. Deploy the Grafana components

kubectl create -f grafana-deploy.yaml

kubectl create -f grafana-svc.yaml

kubectl create -f grafana-ing.yaml

5. Access Grafana

Open http://10.176.57.151:31042/ (if the port differs, check with kubectl get service -n kube-system).

The default username and password are both admin.


Configure Prometheus as the data source.

Set the URL to http://10.176.57.151:30003.


Import a dashboard panel.

You can import dashboard template 315 directly online, or download the JSON template from https://grafana.com/dashboards/315 and import it locally.

Monitoring dashboards (screenshot omitted).
