Distributed Scraping of Maitian Second-Hand and Rental Housing Listings with Scrapy-Redis, with Data Analysis

This article studies Beijing housing prices by scraping all the listings of a single real-estate agency. At the end, the data is analyzed with Pandas and visualized.


Preparation

Maitian second-hand (for-sale) listings page: http://bj.maitian.cn/esfall/PG1

Maitian rental listings page: http://bj.maitian.cn/zfall/PG1

Verifying the second-hand XPath expressions with the Scrapy shell

scrapy shell "http://bj.maitian.cn/esfall/PG1"

title
response.xpath('//div[@class="list_title"]/h1/a/text()').extract()

totalprice
response.xpath('//div[@class="list_title"]/div[@class="the_price"]/ol/strong/span/text()').extract_first().replace('万元','')

unitprice
response.xpath('//div[@class="list_title"]/div[@class="the_price"]/ol/text()').extract_first().replace('元/㎡','')

area 
response.xpath('//div[@class="list_title"]/p/span/text()').extract_first()

district
response.xpath('//div[@class="list_title"]/p/text()').re(r'昌平|朝阳|东城|大兴|丰台|海淀|石景山|顺义|通州|西城')

region
response.xpath('//div[@class="list_title"]/p/text()').re(r'[\u4e00-\u9fa5][\u4e00-\u9fa5]')[1]

next page
response.xpath('//div[@id="paging"]/a[@class="down_page"]/@href').extract_first()

Rental XPath expressions

scrapy shell "http://bj.maitian.cn/zfall/PG1"

title
response.xpath('//div[@class="list_title"]/h1/a/text()').extract_first().strip().replace('\r\n\r\n','')

price
response.xpath('//div[@class="list_title"]/div[@class="the_price"]/ol/strong/span/text()').extract()

area
response.xpath('//div[@class="list_title"]/p/span/text()').extract_first().replace('㎡','')

district
response.xpath('//div[@class="list_title"]/p[@class="house_hot"]/span/text()').re(r'昌平|朝阳|东城|大兴|丰台|海淀|石景山|顺义|通州|西城')[0]

next page:
response.xpath('//div[@id="paging"]/a[@class="down_page"]/@href').extract_first()

The rental spider

There are relatively few rental listings, so plain Scrapy is enough.
In the target directory, run:

scrapy startproject maitian

First define the items in items.py:

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html

import scrapy

class MaitianItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    title = scrapy.Field()
    price = scrapy.Field()
    area = scrapy.Field()
    district = scrapy.Field()
    pass

Using the XPath expressions built above, write a spider for the rental pages, zufang_spider.py:

import scrapy
from maitian.items import MaitianItem

class MaitianSpider(scrapy.Spider):
    name = "zufang"
    start_urls = ['http://bj.maitian.cn/zfall/PG1']

    def parse(self, response):
        # each listing is wrapped in a div.list_title block
        for quote in response.xpath('//div[@class="list_title"]'):
            yield {
                'title': quote.xpath('./h1/a/text()').extract_first().strip().replace('\r\n\r\n',''),
                'price': quote.xpath('./div[@class="the_price"]/ol/strong/span/text()').extract_first(),
                'area': quote.xpath('./p/span/text()').extract_first().replace('㎡',''),
                'district': quote.xpath('./p[@class="house_hot"]/span/text()').re(r'昌平|朝阳|东城|大兴|丰台|海淀|石景山|顺义|通州|西城')[0],
            }

        # follow the "next page" link until there is none
        next_page_url = response.xpath('//div[@id="paging"]/a[@class="down_page"]/@href').extract_first()
        if next_page_url is not None:
            yield scrapy.Request(response.urljoin(next_page_url))

MongoDB is chosen as the database, which requires pipelines.py:

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html

import pymongo

from scrapy.conf import settings


class MaitianPipeline(object):
    def __init__(self):
        # read the MongoDB connection info from settings.py
        host = settings['MONGODB_HOST']
        port = settings['MONGODB_PORT']
        db_name = settings['MONGODB_DBNAME']
        client = pymongo.MongoClient(host=host, port=port)
        db = client[db_name]
        self.post = db[settings['MONGODB_DOCNAME']]

    def process_item(self, item, spider):
        # store every scraped item as a MongoDB document
        zufang = dict(item)
        self.post.insert(zufang)
        return item
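On newer Scrapy versions the scrapy.conf module no longer exists. A sketch of the same pipeline that reads the settings through from_crawler instead (the class name MaitianMongoPipeline is only illustrative; register whichever class you actually use in ITEM_PIPELINES):

import pymongo

class MaitianMongoPipeline(object):
    """Same behaviour as above, but reads settings via from_crawler (Scrapy 1.x+)."""

    def __init__(self, host, port, db_name, doc_name):
        client = pymongo.MongoClient(host=host, port=port)
        self.post = client[db_name][doc_name]

    @classmethod
    def from_crawler(cls, crawler):
        s = crawler.settings
        return cls(s.get('MONGODB_HOST'), s.getint('MONGODB_PORT'),
                   s.get('MONGODB_DBNAME'), s.get('MONGODB_DOCNAME'))

    def process_item(self, item, spider):
        # insert_one is the pymongo 3+ replacement for insert
        self.post.insert_one(dict(item))
        return item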

Enable the pipeline in settings.py; the new settings.py is as follows:

# -*- coding: utf-8 -*-

# Scrapy settings for maitian project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'maitian'

SPIDER_MODULES = ['maitian.spiders']
NEWSPIDER_MODULE = 'maitian.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'maitian (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'maitian.middlewares.MaitianSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'maitian.middlewares.MyCustomDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
#    'maitian.pipelines.MaitianPipeline': 300,
#}

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

ITEM_PIPELINES = {'maitian.pipelines.MaitianPipeline': 300,}  

MONGODB_HOST = '127.0.0.1'
MONGODB_PORT = 27017
MONGODB_DBNAME = 'maitian1'
MONGODB_DOCNAME = 'zufang'  

From the project's root directory, run the spider; the -o flag also saves the results to zf.json for later analysis with Pandas. Don't forget to start MongoDB first with the mongod command:

scrapy crawl zufang -o zf.json

A zf.json file will be generated. Open the MongoDB shell with the mongo command and enter the following to check the results:

use maitian1
db.zufang.find()

The scraped rental documents should be listed in the shell output.
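The same check can be done from Python with pymongo (a quick sketch, assuming the local MongoDB and the database/collection names configured above; count_documents needs pymongo 3.7+):

import pymongo

client = pymongo.MongoClient('127.0.0.1', 27017)
collection = client['maitian1']['zufang']

print(collection.count_documents({}))   # number of rental listings stored
print(collection.find_one())            # peek at one document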


The distributed second-hand spider

There are many more second-hand listings, so Scrapy-Redis is used: one Linux machine serves as the Redis request server and MongoDB database, and two Windows machines act as crawler nodes.
Install Scrapy-Redis:

pip install scrapy-redis
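Before going further, it is worth verifying that each Windows node can actually reach the shared Redis server; a minimal sketch, assuming the Linux box sits at 192.168.0.7 as in the settings below:

import redis

# ping the shared Redis server from a crawler node
r = redis.Redis(host='192.168.0.7', port=6379, db=0)
print(r.ping())   # True if the connection works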

The items.py file needs no changes.

The pipelines.py file needs no changes either.

The XPath expressions differ from the rental spider above; the second-hand spider file ershoufang_spider.py is:

import scrapy
from maitian.items import MaitianItem
from scrapy_redis.spiders import RedisSpider
import redis

# connection to the shared Redis server used for the URL queue
r = redis.Redis(host='192.168.0.7', port=6379, db=0)

class MaitianSpider(RedisSpider):
    name = "ershoufang"
    # start_urls = ['http://bj.maitian.cn/esfall/PG1']
    redis_key = 'maitianspider:start_urls'  # the spider reads its start URLs from this Redis key

    def parse(self, response):
        for quote in response.xpath('//div[@class="list_title"]'):
            yield {
                'title': quote.xpath('./h1/a/text()').extract(),
                'totalprice': quote.xpath('./div[@class="the_price"]/ol/strong/span/text()').extract_first().replace('万元',''),
                'unitprice': quote.xpath('./div[@class="the_price"]/ol/text()').extract_first().replace('元/㎡',''),
                'area': quote.xpath('./p/span/text()').extract_first(),
                'district': quote.xpath('./p/text()').re(r'昌平|朝阳|东城|大兴|丰台|海淀|石景山|顺义|通州|西城')[0],
            }

        # instead of yielding a Request, push the next page back into the shared Redis queue
        next_page_url = response.xpath('//div[@id="paging"]/a[@class="down_page"]/@href').extract_first()
        if next_page_url is not None:
            # yield scrapy.Request(response.urljoin(next_page_url))
            true_next_url = 'http://bj.maitian.cn' + next_page_url
            r.lpush('maitianspider:start_urls', true_next_url)

Note that, through from scrapy_redis.spiders import RedisSpider and the new spider class MaitianSpider(RedisSpider), the scrapy-redis spider is selected.

The core idea of Scrapy-Redis is to use a shared Redis database as the request server. Its GitHub repository is https://github.com/rmax/scrapy-redis

The most important part of Scrapy-Redis is its settings:

# Enables scheduling storing requests queue in redis.
SCHEDULER = "scrapy_redis.scheduler.Scheduler"

# Ensure all spiders share same duplicates filter through redis.
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"

# Default requests serializer is pickle, but it can be changed to any module
# with loads and dumps functions. Note that pickle is not compatible between
# python versions.
# Caveat: In python 3.x, the serializer must return strings keys and support
# bytes as values. Because of this reason the json or msgpack module will not
# work by default. In python 2.x there is no such issue and you can use
# 'json' or 'msgpack' as serializers.
#SCHEDULER_SERIALIZER = "scrapy_redis.picklecompat"

# Don't cleanup redis queues, allows to pause/resume crawls.
#SCHEDULER_PERSIST = True

# Schedule requests using a priority queue. (default)
#SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.PriorityQueue'

# Alternative queues.
#SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.FifoQueue'
#SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.LifoQueue'

# Max idle time to prevent the spider from being closed when distributed crawling.
# This only works if queue class is SpiderQueue or SpiderStack,
# and may also block the same time when your spider start at the first time (because the queue is empty).
#SCHEDULER_IDLE_BEFORE_CLOSE = 10

# Store scraped item in redis for post-processing.
ITEM_PIPELINES = {
    'scrapy_redis.pipelines.RedisPipeline': 300
}

# The item pipeline serializes and stores the items in this redis key.
#REDIS_ITEMS_KEY = '%(spider)s:items'

# The items serializer is by default ScrapyJSONEncoder. You can use any
# importable path to a callable object.
#REDIS_ITEMS_SERIALIZER = 'json.dumps'

# Specify the host and port to use when connecting to Redis (optional).
#REDIS_HOST = 'localhost'
#REDIS_PORT = 6379

# Specify the full Redis URL for connecting (optional).
# If set, this takes precedence over the REDIS_HOST and REDIS_PORT settings.
#REDIS_URL = 'redis://user:pass@hostname:9001'

# Custom redis client parameters (i.e.: socket timeout, etc.)
#REDIS_PARAMS  = {}
# Use custom redis client class.
#REDIS_PARAMS['redis_cls'] = 'myproject.RedisClient'

# If True, it uses redis' ``SPOP`` operation. You have to use the ``SADD``
# command to add URLs to the redis queue. This could be useful if you
# want to avoid duplicates in your start urls list and the order of
# processing does not matter.
#REDIS_START_URLS_AS_SET = False

# Default start urls key for RedisSpider and RedisCrawlSpider.
#REDIS_START_URLS_KEY = '%(name)s:start_urls'

# Use other encoding than utf-8 for redis.
#REDIS_ENCODING = 'latin1'

There are many options; the ones used here are:

# Use the Scrapy-Redis scheduler
SCHEDULER = "scrapy_redis.scheduler.Scheduler"

# Deduplicate requests through a shared Redis set
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"

# Keep the Redis queues so a crawl can be paused and resumed
SCHEDULER_PERSIST = True

# Schedule requests with a priority queue
SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.SpiderPriorityQueue'

# Two pipelines: the first stores items in MongoDB, the second keeps them in Redis
ITEM_PIPELINES = {
    'maitian.pipelines.MaitianPipeline': 300,
    'scrapy_redis.pipelines.RedisPipeline': 400,
}

# Address of the Redis server
REDIS_URL = 'redis://root:@192.168.0.7:6379'
# Address of the MongoDB server
MONGODB_HOST = '192.168.0.7'
MONGODB_PORT = 27017
MONGODB_DBNAME = 'maitian'
MONGODB_DOCNAME = 'ershoufang'

Combined with the earlier rental-spider settings, the new settings.py is:

# -*- coding: utf-8 -*-

# Scrapy settings for maitian project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'maitian'

SPIDER_MODULES = ['maitian.spiders']
NEWSPIDER_MODULE = 'maitian.spiders'

SCHEDULER = "scrapy_redis.scheduler.Scheduler"

DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"

SCHEDULER_PERSIST = True

SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.SpiderPriorityQueue'

ITEM_PIPELINES = {
    'maitian.pipelines.MaitianPipeline': 300,
    'scrapy_redis.pipelines.RedisPipeline': 400,    
}

REDIS_URL = 'redis://root:@192.168.0.7:6379'

MONGODB_HOST = '192.168.0.7'
MONGODB_PORT = 27017
MONGODB_DBNAME = 'maitian'
MONGODB_DOCNAME = 'ershoufang' 

Before running the spiders, don't forget to enable remote access to Redis and MongoDB on the Linux machine.
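Roughly, that means letting both services listen on the LAN interface; the snippet below is only a sketch (paths and flags vary by distribution, and binding to 0.0.0.0 without authentication is only acceptable on a trusted network):

# /etc/redis/redis.conf: accept connections from the crawler nodes
bind 0.0.0.0
protected-mode no

# start MongoDB listening on all interfaces
mongod --bind_ip 0.0.0.0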

On each of the two crawler nodes, run the spider from the project's root directory:

scrapy crawl ershoufang

Both spiders start and then idle; the log shows they are listening for start URLs on the Redis key.

Now push a seed URL onto the redis_key 'maitianspider:start_urls' in Redis:

lpush maitianspider:start_urls http://bj.maitian.cn/esfall/PG1

With this seed URL, both spiders start crawling.
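The same seeding, plus a rough view of the crawl's progress, can also be done from Python with redis-py; a sketch that assumes scrapy-redis' default key names (<spider>:requests, <spider>:dupefilter, <spider>:items):

import redis

r = redis.Redis(host='192.168.0.7', port=6379, db=0)

# seed the crawl (equivalent to the redis-cli lpush above)
r.lpush('maitianspider:start_urls', 'http://bj.maitian.cn/esfall/PG1')

# rough progress indicators
print(r.zcard('ershoufang:requests'))     # pending requests (the priority queue is a sorted set)
print(r.scard('ershoufang:dupefilter'))   # request fingerprints seen so far
print(r.llen('ershoufang:items'))         # items pushed by RedisPipeline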


Data analysis and visualization

Pandas and Matplotlib are used.

Print basic statistics for the sale (and, analogously, the rental) data. Here the second-hand items are assumed to have been exported to an esf.json file:

import numpy as np
import pandas as pd
import json
import matplotlib.pyplot as plt

from pylab import *  
mpl.rcParams['font.sans-serif'] = ['SimHei']

# basic statistics
df = pd.read_json("esf.json")
print(df.describe())
# listing count per district
print(df["district"].value_counts())

Generate pie charts to see each district's share of listings:

import numpy as np
import pandas as pd
import json
import matplotlib.pyplot as plt

from pylab import *  
mpl.rcParams['font.sans-serif'] = ['SimHei']


labels = '朝阳', '海淀', '昌平', '东城', '大兴', '西城', '丰台', '石景山', '通州'
sizes = [4534, 1612, 540, 530, 376, 155, 105, 74, 1]
explode = (0.1, 0, 0, 0,0,0,0,0,0)  
plt.subplot(121)
plt.pie(sizes, explode=explode, labels=labels, autopct='%1.1f%%',
        shadow=True, startangle=-90)
plt.axis('equal')  
plt.title("房屋出售分布")


labels = '朝阳', '海淀', '东城', '西城', '昌平', '石景山', '大兴', '丰台'
sizes = [898, 350, 109, 60, 42, 25, 17, 8]
explode = (0.1, 0, 0, 0,0,0,0,0)  
plt.subplot(122)
plt.pie(sizes, explode=explode, labels=labels, autopct='%1.1f%%',
        shadow=True, startangle=-90)
plt.axis('equal')  
plt.title("房屋出租分布")

plt.show()
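The counts above were typed in by hand from the value_counts() output. A sketch that builds the sale-side pie straight from the DataFrame instead (assuming the same esf.json as before):

import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt

mpl.rcParams['font.sans-serif'] = ['SimHei']

df = pd.read_json("esf.json")
counts = df["district"].value_counts()      # listings per district, largest first

explode = [0.1] + [0] * (len(counts) - 1)   # pull out the largest slice
plt.pie(counts.values, explode=explode, labels=counts.index,
        autopct='%1.1f%%', shadow=True, startangle=-90)
plt.axis('equal')
plt.title("房屋出售分布")
plt.show()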

The second-hand market has 7,927 listings in total. Chaoyang has 4,534, close to 60%, and Haidian has 1,612, about 20%; together they account for more than three quarters of all listings for sale.
There are 1,509 whole-unit rental listings. Chaoyang again has nearly 60% (898) and Haidian just over 20% (350); together they once more make up over three quarters of the rental market.

What matters most, though, is the distribution of Beijing's housing prices:

import numpy as np
import pandas as pd
import json
import matplotlib.pyplot as plt

from pylab import *  
mpl.rcParams['font.sans-serif'] = ['SimHei']

df = pd.read_json("esf.json")

unitprice_values = df.unitprice
plt.hist(unitprice_values,bins=10)
plt.title(u"房屋出售每平米价格分布")
plt.xlabel(u'价格(单位:元/平方米)')
plt.ylabel(u'套数')
plt.show() 

Unit prices range from about 20,000 to 180,000 yuan per square meter. Most listings fall between 60,000 and 120,000, with roughly 80,000 yuan per square meter being the most common. The average over all listings is 83,860 yuan per square meter.
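These figures can be read straight off the DataFrame (same df as above):

print(df["unitprice"].describe())   # count, mean, quartiles; the mean is roughly 83,860 yuan/㎡ for this dataset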

The sale-to-rent ratio (price per square meter divided by monthly rent per square meter) is one measure of the relationship between buying and renting: the lower the ratio, the higher the rental yield per square meter and the more attractive buying becomes. It is commonly used to judge whether a region's housing market is in a bubble and whether it is worth investing in. Internationally, a ratio between 200:1 and 300:1 is generally taken to indicate a healthy market.

For the rental side we need the monthly rent per square meter, which can be computed from the existing columns and added to the DataFrame:

df = pd.read_json("zf.json")            # rental data exported earlier
unitprice_values = df.price / df.area   # monthly rent per square meter
df['unitprice'] = unitprice_values

df.groupby("district").mean()可以显示出每个区的均值信息,整理如下:

From those two tables, the sale-to-rent ratio for each district can be derived (Tongzhou is skipped because it has no rental listings); a sketch of the computation is given below.
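A sketch of that computation from the two exported files (column and file names follow the earlier snippets; the precise values plotted below are taken from the author's tables):

import pandas as pd

# load the sale and rental data exported earlier
esf = pd.read_json("esf.json")
zf = pd.read_json("zf.json")

# sale price per ㎡, and monthly rent per ㎡
esf['unitprice'] = esf['unitprice'].astype(float)
zf['unitprice'] = zf['price'].astype(float) / zf['area'].astype(float)

# sale-to-rent ratio = price per ㎡ divided by monthly rent per ㎡ (unit: months)
sale_mean = esf.groupby('district')['unitprice'].mean()
rent_mean = zf.groupby('district')['unitprice'].mean()
ratio = (sale_mean / rent_mean).dropna().sort_values(ascending=False)
print(ratio)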
A horizontal bar chart (barh) is used to plot the result:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

from pylab import *
mpl.rcParams['font.sans-serif'] = ['SimHei']

# per-district sale-to-rent ratios (months), from the tables above
district = ('西城', '石景山', '东城', '海淀', '丰台', '昌平', '大兴', '朝阳')
ratio = (935.3099343, 895.6966335, 857.8477132, 852.2141772, 836.9530937, 748.4055373, 738.6935962, 702.9053925)

fig, ax = plt.subplots()
y_pos = np.arange(len(district))

ax.barh(y_pos, ratio, align='center', color='green', ecolor='black')
ax.set_yticks(y_pos)
ax.set_yticklabels(district)
ax.invert_yaxis()
ax.set_xlabel('售租比(单位:月)')
ax.set_title('各区房屋售租比')

plt.show()

Every district's sale-to-rent ratio is above 700: a square meter costs more than 700 months of rent. The lowest, Chaoyang at 703, corresponds to roughly 58 years, partly because Chaoyang is large and sits around the business district, so rental demand from commuters is high. In other words, a flat bought there and rented out would take about 58 years to pay for itself. Compared with the recommended range of 200 to 300 (roughly a 20-year payback), Beijing's prices are extremely high relative to rents, and the return on investment is poor. Going by the ratio alone: if you have the money, buy in Chaoyang; if you rent, rent in Xicheng.
