Quickly Setting Up a Hadoop Environment

Hadoop has two core parts: the HDFS distributed file system and the MapReduce computation model. Below I walk through the process I used to set up a Hadoop environment.

Hadoop test environment

Four test machines in total: 1 namenode and 3 datanodes.
OS: RHEL 5.5 x86_64
Hadoop: 0.20.203.0
JDK: jdk1.7.0

Role        IP address
namenode    192.168.57.75
datanode1   192.168.57.76
datanode2   192.168.57.78
datanode3   192.168.57.79

I. Preparation before deploying Hadoop

1. Know that Hadoop depends on Java and SSH. Java 1.5.x or later must be installed, and ssh must be installed with sshd running at all times, so the Hadoop scripts can manage the remote Hadoop daemons.

2. Create a common Hadoop account. All nodes should use the same username, which can be added with:

useradd hadoop
passwd hadoop

3. Configure the hostnames:

tail -n 4 /etc/hosts
192.168.57.75 namenode
192.168.57.76 datanode1
192.168.57.78 datanode2
192.168.57.79 datanode3

4. The settings above must be identical on all nodes (namenode and datanodes); a loop for pushing them out is sketched below.
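
A minimal sketch for pushing these settings out from the namenode. This is a convenience not in the original steps: it assumes root SSH access to each datanode and the hostnames above, and passwd still prompts interactively on each node:

for h in datanode1 datanode2 datanode3; do
    ssh -t root@$h 'useradd hadoop && passwd hadoop'   # create the common account on each node
    scp /etc/hosts root@$h:/etc/hosts                  # keep the host mappings identical everywhere
done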

II. SSH configuration

1. Generate the private key id_rsa and the public key id_rsa.pub

[hadoop@hadoop1 ~]$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/hadoop/.ssh/id_rsa.

Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.

The key fingerprint is:

d6:63:76:43:e2:5b:8e:85:ab:67:a2:7c:a6:8f:23:f9 hadoop@hadoop1.test.com

2. Confirm the private key id_rsa and public key id_rsa.pub files exist

[hadoop@hadoop1 ~]$ ls .ssh/

authorized_keys id_rsa id_rsa.pub known_hosts

3. Copy the public key file to the datanode servers (and to localhost for the namenode itself)

[hadoop@hadoop1 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@datanode1

hadoop@datanode1's password:

Now try logging into the machine, with "ssh 'hadoop@datanode1'", and check in:

.ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

[hadoop@hadoop1 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@datanode2

hadoop@datanode2's password:

Now try logging into the machine, with "ssh 'hadoop@datanode2'", and check in:

.ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

[hadoop@hadoop1 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@datanode3

hadoop@datanode3's password:

Now try logging into the machine, with "ssh 'hadoop@datanode3'", and check in:

.ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

[hadoop@hadoop1 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@localhost

hadoop@localhost's password:

Now try logging into the machine, with "ssh 'hadoop@localhost'", and check in:

.ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.
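
The four ssh-copy-id invocations above can be collapsed into one loop (a sketch; each iteration still prompts once for the hadoop password):

for h in datanode1 datanode2 datanode3 localhost; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@$h    # append our public key to the node's authorized_keys
done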

4. Verify

[hadoop@hadoop1 ~]$ ssh datanode1

Last login: Thu Feb 2 09:01:16 2012 from 192.168.57.71

[hadoop@hadoop2 ~]$ exit

logout

[hadoop@hadoop1 ~]$ ssh datanode2

Last login: Thu Feb 2 09:01:18 2012 from 192.168.57.71

[hadoop@hadoop3 ~]$ exit

logout

[hadoop@hadoop1 ~]$ ssh datanode3

Last login: Thu Feb 2 09:01:20 2012 from 192.168.57.71

[hadoop@hadoop4 ~]$ exit

logout

[hadoop@hadoop1 ~]$ ssh localhost

Last login: Thu Feb 2 09:01:24 2012 from 192.168.57.71

[hadoop@hadoop1 ~]$ exit

logout
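
The same round-trip check, scripted. Each command should print the remote hostname without asking for a password; any prompt means that node's key setup needs revisiting:

for h in datanode1 datanode2 datanode3 localhost; do
    ssh $h hostname
done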

III. Java environment configuration

1. Download a suitable JDK

// this file is the RPM package for 64-bit Linux systems

wget http://download.oracle.com/otn-pub/java/jdk/7/jdk-7-linux-x64.rpm

2. Install the JDK

rpm -ivh jdk-7-linux-x64.rpm

3. Verify Java

[root@hadoop1 ~]# java -version

java version "1.7.0"

Java(TM) SE Runtime Environment (build 1.7.0-b147)

Java HotSpot(TM) 64-Bit Server VM (build 21.0-b17, mixed mode)

[root@hadoop1 ~]# ls /usr/java/

default jdk1.7.0 latest

4. Configure the Java environment variables

vim /etc/profile // add the following lines to the profile file:

# add for hadoop

export JAVA_HOME=/usr/java/jdk1.7.0

export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

export PATH=$PATH:$JAVA_HOME/bin

// make the environment variables take effect

source /etc/profile
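
A quick check that the new variables are active in the current shell:

echo $JAVA_HOME    # expect /usr/java/jdk1.7.0
java -version      # expect "1.7.0", as in step 3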

5. Copy /etc/profile to the datanodes

[root@hadoop1 src]# scp /etc/profile root@datanode1:/etc/

The authenticity of host 'datanode1 (192.168.57.86)' can't be established.

RSA key fingerprint is b5:00:d1:df:73:4c:94:f1:ea:1f:b5:cd:ed:3a:cc:e1.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'datanode1,192.168.57.86' (RSA) to the list of known hosts.

root@datanode1's password:

profile 100% 1624 1.6KB/s 00:00

[root@hadoop1 src]# scp /etc/profile root@datanode2:/etc/

The authenticity of host 'datanode2 (192.168.57.87)' can't be established.

RSA key fingerprint is 57:cf:96:15:78:a3:94:93:30:16:8e:66:47:cd:f9:cd.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'datanode2,192.168.57.87' (RSA) to the list of known hosts.

root@datanode2's password:

profile 100% 1624 1.6KB/s 00:00

[root@hadoop1 src]# scp /etc/profile root@datanode3:/etc/

The authenticity of host 'datanode3 (192.168.57.88)' can't be established.

RSA key fingerprint is 31:73:e8:3c:20:0c:1e:b2:59:5c:d1:01:4b:26:41:70.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'datanode3,192.168.57.88' (RSA) to the list of known hosts.

root@datanode3's password:

profile 100% 1624 1.6KB/s 00:00

6. Copy the JDK package to every datanode and install it on each node (see the sketch after the transcripts below)

[root@hadoop1 ~]# scp -r /home/hadoop/src/ hadoop@datanode1:/home/hadoop/

hadoop@datanode1's password:

hadoop-0.20.203.0rc1.tar.gz 100% 58MB 57.8MB/s 00:01

jdk-7-linux-x64.rpm 100% 78MB 77.9MB/s 00:01

[root@hadoop1 ~]# scp -r /home/hadoop/src/ hadoop@datanode2:/home/hadoop/

hadoop@datanode2's password:

hadoop-0.20.203.0rc1.tar.gz 100% 58MB 57.8MB/s 00:01

jdk-7-linux-x64.rpm 100% 78MB 77.9MB/s 00:01

[root@hadoop1 ~]# scp -r /home/hadoop/src/ hadoop@datanode3:/home/hadoop/

hadoop@datanode3's password:

hadoop-0.20.203.0rc1.tar.gz 100% 58MB 57.8MB/s 00:01

jdk-7-linux-x64.rpm 100% 78MB 77.9MB/s 00:01
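
The packages are now on every datanode, but the JDK still has to be installed there. A loop sketch for that last step, assuming root SSH access and the /home/hadoop/src path used above:

for h in datanode1 datanode2 datanode3; do
    ssh root@$h 'rpm -ivh /home/hadoop/src/jdk-7-linux-x64.rpm'    # install the JDK copied above
done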

IV. Hadoop configuration // note: run these steps as the hadoop user

1. Set up the working directory

[hadoop@hadoop1 ~]$ pwd

/home/hadoop

[hadoop@hadoop1 ~]$ ll

total 59220

lrwxrwxrwx 1 hadoop hadoop 17 Feb 1 16:59 hadoop -> hadoop-0.20.203.0

drwxr-xr-x 12 hadoop hadoop 4096 Feb 1 17:31 hadoop-0.20.203.0

-rw-r--r-- 1 hadoop hadoop 60569605 Feb 1 14:24 hadoop-0.20.203.0rc1.tar.gz

2. Configure hadoop-env.sh to point at the Java installation

vim hadoop/conf/hadoop-env.sh

export JAVA_HOME=/usr/java/jdk1.7.0
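
If you prefer a non-interactive edit, the same change can be scripted (a sketch that assumes the stock hadoop-env.sh still contains its commented-out JAVA_HOME line):

sed -i 's|^# export JAVA_HOME=.*|export JAVA_HOME=/usr/java/jdk1.7.0|' hadoop/conf/hadoop-env.sh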

3. Configure core-site.xml // identifies the file system's namenode

[hadoop@hadoop1 ~]$ cat hadoop/conf/core-site.xml

<?xml version="1.0"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>

<property>

<name>fs.default.name</name>

<value>hdfs://namenode:9000</value>

</property>

</configuration>

4. Configure mapred-site.xml // identifies the master node that runs the jobtracker

[hadoop@hadoop1 ~]$ cat hadoop/conf/mapred-site.xml

<?xml version="1.0"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>

<property>

<name>mapred.job.tracker</name>

<value>namenode:9001</value>

</property>

</configuration>

5. Configure hdfs-site.xml // sets the number of HDFS replicas

[hadoop@hadoop1 ~]$ cat hadoop/conf/hdfs-site.xml

<?xml version="1.0"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>

<property>

<name>dfs.replication</name>

<value>3</value>

</property>

</configuration>

6. Configure the masters and slaves files

[hadoop@hadoop1 ~]$ cat hadoop/conf/masters

namenode

[hadoop@hadoop1 ~]$ cat hadoop/conf/slaves

datanode1

datanode2

datanode3

7. Copy the hadoop directory to all nodes (datanodes)

[hadoop@hadoop1 ~]$ scp -r hadoop hadoop@datanode1:/home/hadoop/

[hadoop@hadoop1 ~]$ scp -r hadoop hadoop@datanode2:/home/hadoop/

[hadoop@hadoop1 ~]$ scp -r hadoop hadoop@datanode3:/home/hadoop
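
The same three copies as a loop (a sketch; scp -r follows the hadoop symlink, so each datanode receives a regular hadoop directory rather than a link):

for h in datanode1 datanode2 datanode3; do
    scp -r hadoop hadoop@$h:/home/hadoop/
done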

8. Format HDFS

[hadoop@hadoop1 hadoop]$ bin/hadoop namenode -format

12/02/02 11:31:15 INFO namenode.NameNode: STARTUP_MSG:

/************************************************************

STARTUP_MSG: Starting NameNode

STARTUP_MSG: host = hadoop1.test.com/127.0.0.1

STARTUP_MSG: args = [-format]

STARTUP_MSG: version = 0.20.203.0

STARTUP_MSG: build = http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-203 -r 1099333; compiled by 'oom' on Wed May 4 07:57:50 PDT 2011

************************************************************/

Re-format filesystem in /tmp/hadoop-hadoop/dfs/name ? (Y or N) Y // enter Y here

12/02/02 11:31:17 INFO util.GSet: VM type = 64-bit

12/02/02 11:31:17 INFO util.GSet: 2% max memory = 19.33375 MB

12/02/02 11:31:17 INFO util.GSet: capacity = 2^21 = 2097152 entries

12/02/02 11:31:17 INFO util.GSet: recommended=2097152, actual=2097152

12/02/02 11:31:17 INFO namenode.FSNamesystem: fsOwner=hadoop

12/02/02 11:31:18 INFO namenode.FSNamesystem: supergroup=supergroup

12/02/02 11:31:18 INFO namenode.FSNamesystem: isPermissionEnabled=true

12/02/02 11:31:18 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100

12/02/02 11:31:18 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)

12/02/02 11:31:18 INFO namenode.NameNode: Caching file names occuring more than 10 times

12/02/02 11:31:18 INFO common.Storage: Image file of size 112 saved in 0 seconds.

12/02/02 11:31:18 INFO common.Storage: Storage directory /tmp/hadoop-hadoop/dfs/name has been successfully formatted.

12/02/02 11:31:18 INFO namenode.NameNode: SHUTDOWN_MSG:

/************************************************************

SHUTDOWN_MSG: Shutting down NameNode at hadoop1.test.com/127.0.0.1

************************************************************/

[hadoop@hadoop1 hadoop]$

9. Start the Hadoop daemons

[hadoop@hadoop1 hadoop]$ bin/start-all.sh

starting namenode, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-namenode-hadoop1.test.com.out

datanode1: starting datanode, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-hadoop2.test.com.out

datanode2: starting datanode, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-hadoop3.test.com.out

datanode3: starting datanode, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-datanode-hadoop4.test.com.out

starting jobtracker, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-jobtracker-hadoop1.test.com.out

datanode1: starting tasktracker, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-hadoop2.test.com.out

datanode2: starting tasktracker, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-hadoop3.test.com.out

datanode3: starting tasktracker, logging to /home/hadoop/hadoop/bin/../logs/hadoop-hadoop-tasktracker-hadoop4.test.com.out

10. Verify

//namenode

[hadoop@hadoop1 logs]$ jps

2883 JobTracker

3002 Jps

2769 NameNode

//datanode

[hadoop@hadoop2 ~]$ jps

2743 TaskTracker

2670 DataNode

2857 Jps

[hadoop@hadoop3 ~]$ jps

2742 TaskTracker

2856 Jps

2669 DataNode

[hadoop@hadoop4 ~]$ jps

2742 TaskTracker

2852 Jps

2659 DataNode

Hadoop monitoring web page:

http://192.168.57.75:50070/dfshealth.jsp
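
The cluster can also be checked from the command line; the report below should show three live datanodes, matching the jps output above:

bin/hadoop dfsadmin -report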

V. A simple HDFS sanity check

Hadoop file system commands have the following form:

hadoop fs -cmd <args>

// create a directory

[hadoop@hadoop1 hadoop]$ bin/hadoop fs -mkdir /test-hadoop

// list a directory

[hadoop@hadoop1 hadoop]$ bin/hadoop fs -ls /

Found 2 items

drwxr-xr-x - hadoop supergroup 0 2012-02-02 13:32 /test-hadoop

drwxr-xr-x - hadoop supergroup 0 2012-02-02 11:32 /tmp

// list a directory recursively, including subdirectories

[hadoop@hadoop1 hadoop]$ bin/hadoop fs -lsr /

drwxr-xr-x - hadoop supergroup 0 2012-02-02 13:32 /test-hadoop

drwxr-xr-x - hadoop supergroup 0 2012-02-02 11:32 /tmp

drwxr-xr-x - hadoop supergroup 0 2012-02-02 11:32 /tmp/hadoop-hadoop

drwxr-xr-x - hadoop supergroup 0 2012-02-02 11:32 /tmp/hadoop-hadoop/mapred

drwx------ - hadoop supergroup 0 2012-02-02 11:32 /tmp/hadoop-hadoop/mapred/system

-rw------- 2 hadoop supergroup 4 2012-02-02 11:32 /tmp/hadoop-hadoop/mapred/system/jobtracker.info

// add a file

[hadoop@hadoop1 hadoop]$ bin/hadoop fs -put /home/hadoop/hadoop-0.20.203.0rc1.tar.gz /test-hadoop

[hadoop@hadoop1 hadoop]$ bin/hadoop fs -lsr /

drwxr-xr-x - hadoop supergroup 0 2012-02-02 13:34 /test-hadoop

-rw-r--r-- 2 hadoop supergroup 60569605 2012-02-02 13:34 /test-hadoop/hadoop-0.20.203.0rc1.tar.gz

drwxr-xr-x - hadoop supergroup 0 2012-02-02 11:32 /tmp

drwxr-xr-x - hadoop supergroup 0 2012-02-02 11:32 /tmp/hadoop-hadoop

drwxr-xr-x - hadoop supergroup 0 2012-02-02 11:32 /tmp/hadoop-hadoop/mapred

drwx------ - hadoop supergroup 0 2012-02-02 11:32 /tmp/hadoop-hadoop/mapred/system

-rw------- 2 hadoop supergroup 4 2012-02-02 11:32 /tmp/hadoop-hadoop/mapred/system/jobtracker.info

// fetch a file

[hadoop@hadoop1 hadoop]$ bin/hadoop fs -get /test-hadoop/hadoop-0.20.203.0rc1.tar.gz /tmp/

[hadoop@hadoop1 hadoop]$ ls /tmp/*.tar.gz

/tmp/1.tar.gz /tmp/hadoop-0.20.203.0rc1.tar.gz

// delete a file

[hadoop@hadoop1 hadoop]$ bin/hadoop fs -rm /test-hadoop/hadoop-0.20.203.0rc1.tar.gz

Deleted hdfs://namenode:9000/test-hadoop/hadoop-0.20.203.0rc1.tar.gz

[hadoop@hadoop1 hadoop]$ bin/hadoop fs -lsr /

drwxr-xr-x - hadoop supergroup 0 2012-02-02 13:57 /test-hadoop

drwxr-xr-x - hadoop supergroup 0 2012-02-02 11:32 /tmp

drwxr-xr-x - hadoop supergroup 0 2012-02-02 11:32 /tmp/hadoop-hadoop

drwxr-xr-x - hadoop supergroup 0 2012-02-02 11:32 /tmp/hadoop-hadoop/mapred

drwx------ - hadoop supergroup 0 2012-02-02 11:32 /tmp/hadoop-hadoop/mapred/system

-rw------- 2 hadoop supergroup 4 2012-02-02 11:32 /tmp/hadoop-hadoop/mapred/system/jobtracker.info

drwxr-xr-x - hadoop supergroup 0 2012-02-02 13:36 /user

-rw-r--r-- 2 hadoop supergroup 321 2012-02-02 13:36 /user/hadoop

// delete a directory

[hadoop@hadoop1 hadoop]$ bin/hadoop fs -rmr /test-hadoop

Deleted hdfs://namenode:9000/test-hadoop

[hadoop@hadoop1 hadoop]$ bin/hadoop fs -lsr /

drwxr-xr-x - hadoop supergroup 0 2012-02-02 11:32 /tmp

drwxr-xr-x - hadoop supergroup 0 2012-02-02 11:32 /tmp/hadoop-hadoop

drwxr-xr-x - hadoop supergroup 0 2012-02-02 11:32 /tmp/hadoop-hadoop/mapred

drwx------ - hadoop supergroup 0 2012-02-02 11:32 /tmp/hadoop-hadoop/mapred/system

-rw------- 2 hadoop supergroup 4 2012-02-02 11:32 /tmp/hadoop-hadoop/mapred/system/jobtracker.info

drwxr-xr-x - hadoop supergroup 0 2012-02-02 13:36 /user

-rw-r--r-- 2 hadoop supergroup 321 2012-02-02 13:36 /user/hadoop

// hadoop fs help (excerpt)

[hadoop@hadoop1 hadoop]$ bin/hadoop fs -help

hadoop fs is the command to execute fs commands. The full syntax is:

hadoop fs [-fs <local | file system URI>] [-conf <configuration file>]

[-D <property=value>] [-ls <path>] [-lsr <path>] [-du <path>] 

[-dus <path>] [-mv <src> <dst>] [-cp <src> <dst>] [-rm [-skipTrash] <src>] 

[-rmr [-skipTrash] <src>] [-put <localsrc> ... <dst>] [-copyFromLocal <localsrc> ... <dst>] 

[-moveFromLocal <localsrc> ... <dst>] [-get [-ignoreCrc] [-crc] <src> <localdst> 

[-getmerge <src> <localdst> [addnl]] [-cat <src>] 

[-copyToLocal [-ignoreCrc] [-crc] <src> <localdst>] [-moveToLocal <src> <localdst>] 

[-mkdir <path>] [-report] [-setrep [-R] [-w] <rep> <path/file>] 

[-touchz <path>] [-test -[ezd] <path>] [-stat [format] <path>] 

[-tail [-f] <path>] [-text <path>] 

[-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...] 

[-chown [-R] [OWNER][:[GROUP]] PATH...] 

[-chgrp [-R] GROUP PATH...] 

[-count[-q] <path>] 

[-help [cmd]] 

Wrapping up: setting up a Hadoop environment involves many fiddly steps and requires some Linux system knowledge. Note that the environment built above is only meant to give you a general feel for Hadoop; if you want to use HDFS for a production service, the Hadoop configuration files need further tuning. Follow-up material will continue to be published as blog posts, so stay tuned.
