deep_learning_month4_week2_Residual_Networks

Tags: machine learning, deep learning

The code has been uploaded to GitHub:
https://github.com/PerfectDemoT/my_deeplearning_homework


[TOC]

These are my personal reference notes on building a 50-layer residual network.
It relies heavily on Keras's high-level functions, so the code is concise and quick to put together. (Also, the final 50-layer ResNet really does take a long time to train on a CPU: one epoch over roughly 1,000 images can take several minutes. It drives home that "training for weeks" is no exaggeration, or rather, that my own machine simply cannot handle large datasets...)

The final application is image recognition: classifying which digit (0-5) a hand gesture represents.

Now let's walk through the concrete steps of implementing the residual network.

1. The residual block concept

Residual block

As shown in the figure above, the left side is a plain network and the right side is a residual network (this is one residual block). It can be written as:
$$a^{[l+2]} = g(z^{[l+2]} + a^{[l]})$$
with the convention that:
$$z^{[l+1]} = W^{[l+1]}a^{[l]} + b^{[l+1]}$$
$$a^{[l+1]} = g(z^{[l+1]})$$
$$z^{[l+2]} = W^{[l+2]}a^{[l+1]} + b^{[l+2]}$$
That is the gist of a residual network. Next, let's see how to implement it concretely for a CNN.
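
To make the equations concrete, here is a minimal NumPy sketch (my own illustration, not part of the original assignment; toy dense layers with made-up shapes) that computes one forward pass through a residual block exactly as defined above:

import numpy as np

def relu(z):
    return np.maximum(0, z)

np.random.seed(0)
n = 4                        # keep the layer width constant so a[l] can be added directly
a_l = np.random.randn(n, 1)  # a[l], the block input
W1, b1 = np.random.randn(n, n), np.zeros((n, 1))
W2, b2 = np.random.randn(n, n), np.zeros((n, 1))

z1 = W1 @ a_l + b1           # z[l+1] = W[l+1] a[l] + b[l+1]
a1 = relu(z1)                # a[l+1] = g(z[l+1])
z2 = W2 @ a1 + b2            # z[l+2] = W[l+2] a[l+1] + b[l+2]
a2 = relu(z2 + a_l)          # a[l+2] = g(z[l+2] + a[l])  <- the shortcut addition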

2. Building the residual blocks for a CNN

1. First, import the packages

import numpy as np
from keras import layers
from keras.layers import Input, Add, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D, AveragePooling2D, MaxPooling2D, GlobalMaxPooling2D
from keras.models import Model, load_model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from resnets_utils import *
from keras.initializers import glorot_uniform
import scipy.misc
from matplotlib.pyplot import imshow
%matplotlib inline

import keras.backend as K
K.set_image_data_format('channels_last')
K.set_learning_phase(1)

2. The identity block: diagram and implementation

A two-layer identity block


identity block

A three-layer identity block


identity block

Be prepared: we will use many of Keras's built-in layer functions. Since they were all imported above, they can be called by name directly without any module prefix, so don't be surprised by that.

# GRADED FUNCTION: identity_block

def identity_block(X, f, filters, stage, block):
    """
    Implementation of the identity block as defined in Figure 3

    Arguments:
    X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
    f -- integer, specifying the shape of the middle CONV's window for the main path
    filters -- python list of integers, defining the number of filters in the CONV layers of the main path
    stage -- integer, used to name the layers, depending on their position in the network
    block -- string/character, used to name the layers, depending on their position in the network

    Returns:
    X -- output of the identity block, tensor of shape (n_H, n_W, n_C)
    """

    # defining name basis
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    # Retrieve Filters
    F1, F2, F3 = filters

    # Save the input value. You'll need this later to add back to the main path. 
    X_shortcut = X

    # First component of main path
    X = Conv2D(filters = F1, kernel_size = (1, 1), strides = (1,1), padding = 'valid', name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X)
    X = Activation('relu')(X)

    ### START CODE HERE ###

    # Second component of main path (≈3 lines)
    X = Conv2D(filters=F2, kernel_size=(f,f), strides=(1,1), padding='same', name=conv_name_base+'2b', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2b')(X)
    X = Activation('relu')(X)

    # Third component of main path (≈2 lines)
    X = Conv2D(filters=F3, kernel_size=(1,1), strides=(1,1), padding='valid', name=conv_name_base + '2c', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2c')(X)

    # Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)
    X = Add()([X, X_shortcut])
    X = Activation('relu')(X)

    ### END CODE HERE ###

    return X

As you can see, this is written for the three-layer case, and to tell the components apart we deliberately labeled them (that's what the '2a', '2b', '2c' suffixes are for).

It's worth mentioning that if you are not familiar with Keras, this part may be a bit hard to follow and you may need to brush up on it first...
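
For readers new to Keras, the essential pattern above is the functional API: every layer object is called on a tensor and returns a new tensor. Here is a tiny self-contained toy example of that pattern (my own illustration, with hypothetical shapes):

from keras.layers import Input, Conv2D, Activation
from keras.models import Model

X_input = Input((8, 8, 3))                         # a toy 8x8 RGB input tensor
X = Conv2D(4, (3, 3), padding='same')(X_input)     # each layer is called on a tensor...
X = Activation('relu')(X)                          # ...and returns a new tensor
toy_model = Model(inputs=X_input, outputs=X)
toy_model.summary()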

Next, let's run the block and check the output:

import tensorflow as tf

tf.reset_default_graph()

with tf.Session() as test:
    np.random.seed(1)
    A_prev = tf.placeholder("float", [3, 4, 4, 6])
    X = np.random.randn(3, 4, 4, 6)
    A = identity_block(A_prev, f = 2, filters = [2, 4, 6], stage = 1, block = 'a')
    test.run(tf.global_variables_initializer())
    out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0})
    print("out = " + str(out[0][1][1][0]))

Reference output:

out = [ 0.94822997  0.          1.16101444  2.747859    0.          1.36677003]

3. The convolutional block: diagram and implementation

convolutional block

As the figure shows, on the shortcut path we no longer simply add the input: instead, $a^{[l]}$ is first passed through a convolution and then BatchNorm. The reason is to handle the case where the block's input and output dimensions do not match, i.e. where $z^{[l+2]}$ cannot be added to $a^{[l]}$ directly.
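
As a quick illustration of that mismatch (a small sketch I added, with toy shapes, not part of the assignment): if the main path uses stride s=2, its output has half the spatial size of the input, so the untouched shortcut cannot be added to it.

import keras.backend as K
from keras.layers import Input, Conv2D

X_in = Input((4, 4, 6))
main = Conv2D(6, (1, 1), strides=(2, 2))(X_in)   # main path downsamples with s=2
print(K.int_shape(X_in))    # (None, 4, 4, 6)
print(K.int_shape(main))    # (None, 2, 2, 6) -> different spatial size, cannot Add()
# applying Conv2D(F3, (1, 1), strides=(s, s)) + BatchNorm to the shortcut
# brings it to (None, 2, 2, F3) so the addition works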

Here is the implementation. It is not much different from the identity block; the only addition is on the shortcut path.

# GRADED FUNCTION: convolutional_block

def convolutional_block(X, f, filters, stage, block, s = 2):
    """
    Implementation of the convolutional block as defined in Figure 4

    Arguments:
    X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
    f -- integer, specifying the shape of the middle CONV's window for the main path
    filters -- python list of integers, defining the number of filters in the CONV layers of the main path
    stage -- integer, used to name the layers, depending on their position in the network
    block -- string/character, used to name the layers, depending on their position in the network
    s -- Integer, specifying the stride to be used

    Returns:
    X -- output of the convolutional block, tensor of shape (n_H, n_W, n_C)
    """

    # defining name basis
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    # Retrieve Filters
    F1, F2, F3 = filters

    # Save the input value
    X_shortcut = X


    ##### MAIN PATH #####
    # First component of main path 
    X = Conv2D(F1, (1, 1), strides = (s,s), name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X)
    X = Activation('relu')(X)

    ### START CODE HERE ###

    # Second component of main path (≈3 lines)
    X = Conv2D(filters=F2, kernel_size=(f,f), strides=(1,1), padding='same', name=conv_name_base+'2b', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base+'2b')(X)
    X = Activation('relu')(X)

    # Third component of main path (≈2 lines)
    X = Conv2D(filters=F3, kernel_size=(1,1), strides=(1,1), padding='valid', name=conv_name_base+'2c', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base+'2c')(X)

    ##### SHORTCUT PATH #### (≈2 lines)
    X_shortcut = Conv2D(filters=F3, kernel_size=(1,1), strides=(s, s), padding='valid', name=conv_name_base+'1', kernel_initializer=glorot_uniform(seed=0))(X_shortcut)
    X_shortcut = BatchNormalization(axis=3, name=bn_name_base+'1')(X_shortcut)

    # Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)
    X = Add()([X, X_shortcut])
    X = Activation('relu')(X)

    ### END CODE HERE ###

    return X

As you can see, apart from the extra SHORTCUT PATH section, this is essentially the same as the identity block, so if you understood the previous one, this one is not hard either.

Now let's test it:

tf.reset_default_graph()

with tf.Session() as test:
    np.random.seed(1)
    A_prev = tf.placeholder("float", [3, 4, 4, 6])
    X = np.random.randn(3, 4, 4, 6)
    A = convolutional_block(A_prev, f = 2, filters = [2, 4, 6], stage = 1, block = 'a')
    test.run(tf.global_variables_initializer())
    out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0})
#     print(len(out[0]))
#     print(out)
    print("out = " + str(out[0][1][1][0]))

The result is:

    [ 0.09018463 1.23489773 0.46822017 0.0367176 0. 0.65516603]

3. Building the 50-layer residual network

Here is the schematic:


The 50-layer ResNet

In the figure, “ID BLOCK” means “identity block”, and “ID BLOCK x3” means that three identity blocks are stacked together.

Here is the detailed specification:

The details of this ResNet-50 model are:

  • Zero-padding pads the input with a pad of (3,3)
  • Stage 1:
      • The 2D Convolution has 64 filters of shape (7,7) and uses a stride of (2,2). Its name is “conv1”.
      • BatchNorm is applied to the channels axis of the input.
      • MaxPooling uses a (3,3) window and a (2,2) stride.
  • Stage 2:
      • The convolutional block uses three sets of filters of size [64,64,256], “f” is 3, “s” is 1 and the block is “a”.
      • The 2 identity blocks use three sets of filters of size [64,64,256], “f” is 3 and the blocks are “b” and “c”.
  • Stage 3:
      • The convolutional block uses three sets of filters of size [128,128,512], “f” is 3, “s” is 2 and the block is “a”.
      • The 3 identity blocks use three sets of filters of size [128,128,512], “f” is 3 and the blocks are “b”, “c” and “d”.
  • Stage 4:
      • The convolutional block uses three sets of filters of size [256, 256, 1024], “f” is 3, “s” is 2 and the block is “a”.
      • The 5 identity blocks use three sets of filters of size [256, 256, 1024], “f” is 3 and the blocks are “b”, “c”, “d”, “e” and “f”.
  • Stage 5:
      • The convolutional block uses three sets of filters of size [512, 512, 2048], “f” is 3, “s” is 2 and the block is “a”.
      • The 2 identity blocks use three sets of filters of size [512, 512, 2048], “f” is 3 and the blocks are “b” and “c”.
  • The 2D Average Pooling uses a window of shape (2,2) and its name is “avg_pool”.
  • The flatten doesn’t have any hyperparameters or name.
  • The Fully Connected (Dense) layer reduces its input to the number of classes using a softmax activation. Its name should be 'fc' + str(classes).

This description essentially specifies the hyperparameter settings used inside the network.
Now let's look at the code:

def ResNet50(input_shape = (64, 64, 3), classes = 6):
    """
    Implementation of the popular ResNet50 with the following architecture:
    CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> CONVBLOCK -> IDBLOCK*2 -> CONVBLOCK -> IDBLOCK*3
    -> CONVBLOCK -> IDBLOCK*5 -> CONVBLOCK -> IDBLOCK*2 -> AVGPOOL -> TOPLAYER

    Arguments:
    input_shape -- shape of the images of the dataset
    classes -- integer, number of classes

    Returns:
    model -- a Model() instance in Keras
    """

    # Define the input as a tensor with shape input_shape
    X_input = Input(input_shape)


    # Zero-Padding
    X = ZeroPadding2D((3, 3))(X_input)

    # Stage 1
    X = Conv2D(64, (7, 7), strides = (2, 2), name = 'conv1', kernel_initializer = glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis = 3, name = 'bn_conv1')(X)
    X = Activation('relu')(X)
    X = MaxPooling2D((3, 3), strides=(2, 2))(X)

    # Stage 2
    X = convolutional_block(X, f = 3, filters = [64, 64, 256], stage = 2, block='a', s = 1)
    X = identity_block(X, 3, [64, 64, 256], stage=2, block='b')
    X = identity_block(X, 3, [64, 64, 256], stage=2, block='c')

    ### START CODE HERE ###

    # helper functions
    # convolutional_block(X, f, filters, stage, block, s = 2)
    # identity_block(X, f, filters, stage, block)

    # Stage 3 (≈4 lines)
    X = convolutional_block(X, f=3, filters=[128, 128, 512], stage=3, block='a', s=2)
    X = identity_block(X, f=3, filters=[128, 128, 512], stage=3, block='b')
    X = identity_block(X, f=3, filters=[128, 128, 512], stage=3, block='c')
    X = identity_block(X, f=3, filters=[128, 128, 512], stage=3, block='d')

    # Stage 4 (≈6 lines)
    X = convolutional_block(X, f=3, filters=[256, 256, 1024], stage=4, block='a', s=2)
    X = identity_block(X, f=3, filters=[256, 256, 1024], stage=4, block='b')
    X = identity_block(X, f=3, filters=[256, 256, 1024], stage=4, block='c')
    X = identity_block(X, f=3, filters=[256, 256, 1024], stage=4, block='d')
    X = identity_block(X, f=3, filters=[256, 256, 1024], stage=4, block='e')
    X = identity_block(X, f=3, filters=[256, 256, 1024], stage=4, block='f')

    # Stage 5 (≈3 lines)
    X = convolutional_block(X, f=3, filters=[512, 512, 2048], stage=5, block='a', s=2)
    X = identity_block(X, f=3, filters=[512, 512, 2048], stage=5, block='b')
    X = identity_block(X, f=3, filters=[512, 512, 2048], stage=5, block='c')

    # AVGPOOL (≈1 line). Use "X = AveragePooling2D(...)(X)"
    X = AveragePooling2D((2,2), name='avg_pool')(X)

    ### END CODE HERE ###

    # output layer
    X = Flatten()(X)
    X = Dense(classes, activation='softmax', name='fc' + str(classes), kernel_initializer = glorot_uniform(seed=0))(X)


    # Create model
    model = Model(inputs = X_input, outputs = X, name='ResNet50')

    return model

As you can see, this uses the two helpers written earlier:
convolutional_block(X, f, filters, stage, block, s = 2)
identity_block(X, f, filters, stage, block)
With them, the 50-layer residual network is assembled.
Again, note that Flatten(), Dense() and so on are Keras functions that were imported at the top, so they can be used here directly.
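
As a sanity check on the name, counting the weighted layers does give 50: stage 1 has 1 conv; stages 2-5 contain 3 + 4 + 6 + 3 = 16 blocks, each with 3 convs on the main path, i.e. 48 convs; plus the final Dense layer: 1 + 48 + 1 = 50.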

4. Training with the network built above

Call the function defined above with the following statement:

model = ResNet50(input_shape = (64, 64, 3), classes = 6)

Next:
As seen in the Keras tutorial notebook, prior to training a model, you need to configure the learning process by compiling the model.

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

Load the data and check its size:

X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()

# Normalize image vectors
X_train = X_train_orig/255.
X_test = X_test_orig/255.

# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T

print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))

The dataset sizes are:

number of training examples = 1080
number of test examples = 120
X_train shape: (1080, 64, 64, 3)
Y_train shape: (1080, 6)
X_test shape: (120, 64, 64, 3)
Y_test shape: (120, 6)
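
The helper convert_to_one_hot comes from resnets_utils. In case that file is not at hand, a minimal equivalent (my assumption, based purely on how the function is used above) could look like this:

import numpy as np

def convert_to_one_hot(Y, C):
    # Y has shape (1, m) with integer labels 0..C-1; the result has shape (C, m),
    # which is why the calls above transpose it with .T to get (m, C)
    return np.eye(C)[Y.reshape(-1)].T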

Then, start training:

model.fit(X_train, Y_train, epochs = 2, batch_size = 32)

You will then find that training is extremely slow (on a CPU, of course; if you train on a GPU, please ignore this remark...). Just look at my training screenshot below.


With just over a thousand images, a single epoch already takes more than two minutes... So, if you want to verify the results, either use a GPU or simply load the .h5 files that others have already trained.

Here are links to .h5 files trained on a GPU by another author:

resnet50_20_epochs.h5 link: https://pan.baidu.com/s/1eROf3BO password: qed2
resnet50_30_epochs.h5 link: https://pan.baidu.com/s/1o8kPNUM password: tqio
resnet50_44_epochs.h5 link: https://pan.baidu.com/s/1c1N3AzI password: 2xwu
resnet50_55_epochs.h5 link: https://pan.baidu.com/s/1bpfMA0v password: cxcv
The model file provided on Coursera:
ResNet50.h5 link: https://pan.baidu.com/s/1boCG2Iz password: sefq

Many thanks to that author; here is a link to his blog:
https://blog.csdn.net/hongbin_xu/article/details/78766642

Using his trained weights, we get the following accuracy:

model = load_model('ResNet50.h5')
model = load_model('resnet50_44_epochs.h5')   # note: this second call overwrites the first, so these are the weights evaluated below

preds = model.evaluate(X_test, Y_test)
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
120/120 [==============================] - 9s 78ms/step
Loss = 0.0914498666922
Test Accuracy = 0.958333337307

You can print the network structure:

model.summary()
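
Since SVG, model_to_dot and plot_model were imported at the top, you could also render the architecture graphically (a sketch; it assumes pydot and Graphviz are installed on your machine):

plot_model(model, to_file='ResNet50.png')                       # writes the graph to a PNG file
SVG(model_to_dot(model).create(prog='dot', format='svg'))       # or display it inline in a notebook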

You can also test an image of your own:

img_path = 'images/my_image.jpg'
img = image.load_img(img_path, target_size=(64, 64))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
print('Input image shape:', x.shape)
my_image = scipy.misc.imread(img_path)
imshow(my_image)
print("class prediction vector [p(0), p(1), p(2), p(3), p(4), p(5)] = ")
print(model.predict(x))
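
The prediction is a (1, 6) probability vector. To turn it into a single digit, you could take the argmax (a small addition of mine, not in the original notebook):

prediction = model.predict(x)
print("predicted digit:", int(np.argmax(prediction)))   # index of the most probable class, 0-5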

5. Code for the training process

Although training this example on a CPU takes a long time, we can still take a look at the process.

We set the number of epochs to 20.

model = ResNet50(input_shape = (64, 64, 3), classes = 6)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X_train, Y_train, epochs = 20, batch_size = 32)
model.save('resnet50_20_epochs.h5')
preds = model.evaluate(X_test, Y_test)
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))

Then we can see:

Epoch 1/20
1080/1080 [==============================] - 15s 14ms/step - loss: 2.5141 - acc: 0.4241
Epoch 2/20
1080/1080 [==============================] - 5s 5ms/step - loss: 1.7727 - acc: 0.6194
Epoch 3/20
1080/1080 [==============================] - 6s 5ms/step - loss: 1.4935 - acc: 0.6769
Epoch 4/20
1080/1080 [==============================] - 5s 5ms/step - loss: 1.5494 - acc: 0.5833
Epoch 5/20
1080/1080 [==============================] - 5s 5ms/step - loss: 0.6902 - acc: 0.7889
Epoch 6/20
1080/1080 [==============================] - 5s 5ms/step - loss: 0.4155 - acc: 0.8593
Epoch 7/20
1080/1080 [==============================] - 5s 5ms/step - loss: 0.2782 - acc: 0.9139
Epoch 8/20
1080/1080 [==============================] - 5s 5ms/step - loss: 0.1665 - acc: 0.9500
Epoch 9/20
1080/1080 [==============================] - 5s 5ms/step - loss: 0.2578 - acc: 0.9185
Epoch 10/20
1080/1080 [==============================] - 5s 5ms/step - loss: 0.1690 - acc: 0.9435
Epoch 11/20
1080/1080 [==============================] - 5s 5ms/step - loss: 0.0913 - acc: 0.9694
Epoch 12/20
1080/1080 [==============================] - 5s 5ms/step - loss: 0.1389 - acc: 0.9602
Epoch 13/20
1080/1080 [==============================] - 5s 5ms/step - loss: 0.1490 - acc: 0.9444
Epoch 14/20
1080/1080 [==============================] - 5s 5ms/step - loss: 0.1044 - acc: 0.9694
Epoch 15/20
1080/1080 [==============================] - 5s 5ms/step - loss: 0.0435 - acc: 0.9861
Epoch 16/20
1080/1080 [==============================] - 5s 5ms/step - loss: 0.0324 - acc: 0.9926
Epoch 17/20
1080/1080 [==============================] - 5s 5ms/step - loss: 0.0190 - acc: 0.9926
Epoch 18/20
1080/1080 [==============================] - 5s 5ms/step - loss: 0.0577 - acc: 0.9824
Epoch 19/20
1080/1080 [==============================] - 5s 5ms/step - loss: 0.0268 - acc: 0.9907
Epoch 20/20
1080/1080 [==============================] - 5s 5ms/step - loss: 0.0662 - acc: 0.9787
120/120 [==============================] - 2s 17ms/step
Loss = 0.825686124961
Test Accuracy = 0.833333333333
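
If you would rather see curves than the raw log, one option (a sketch, assuming you re-run the training and keep the History object that fit returns) is:

import matplotlib.pyplot as plt

history = model.fit(X_train, Y_train, epochs = 20, batch_size = 32)   # fit returns a History object
plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['acc'], label='acc')    # the metric key is 'acc' in this Keras version
plt.xlabel('epoch')
plt.legend()
plt.show()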