Preparation
Download the installation packages for Hadoop and the JDK. This article uses Hadoop 3.1.3. For the JDK, the 8u201 rpm package is recommended: 201 is more stable than 202, and it is also the last JDK 8 release that was free for commercial use.
Prepare three servers:
- 192.168.1.68 master
- 192.168.1.69 slave1
- 192.168.1.70 slave2
Install the JDK
Install via rpm. It installs to /usr/java by default, and no extra environment variables need to be configured in /etc/profile afterwards.
rpm -ivh jdk-8u201-linux-x64.rpm
#verify the installation
cd /usr/java/
ls
default -> /usr/java/latest
jdk1.8.0_201-amd64
latest -> /usr/java/jdk1.8.0_201-amd64
Install Hadoop
- Extract the archive to /opt
- Rename the directory
cd /opt/
tar -zxvf /root/hadoop-3.1.3.tar.gz
mv hadoop-3.1.3/ hadoop
Hadoop Cluster Configuration
Edit the hosts file
vim /etc/hosts
#add the following entries
192.168.1.68 master
192.168.1.69 slave1
192.168.1.70 slave2
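The entries above must be identical on all three machines. A small sketch of appending and verifying them in one pass; it writes to a temporary file purely for illustration, whereas on a real node you would target /etc/hosts directly as root:

```shell
# Sketch: append the cluster entries and verify each hostname resolves to a line.
# Uses a temp file for illustration; on a real node, target /etc/hosts as root.
HOSTS_FILE=$(mktemp)
cat >> "$HOSTS_FILE" <<'EOF'
192.168.1.68 master
192.168.1.69 slave1
192.168.1.70 slave2
EOF
OK_COUNT=0
for h in master slave1 slave2; do
    # anchored match: whitespace, hostname, end of line
    grep -q "[[:space:]]$h\$" "$HOSTS_FILE" && OK_COUNT=$((OK_COUNT + 1))
done
echo "$OK_COUNT of 3 entries present"
rm -f "$HOSTS_FILE"
```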
Configure the Hadoop environment variables
Add a hadoop.sh configuration file under /etc/profile.d/. This is easier to maintain than editing /etc/profile directly, since profile loops over and sources every *.sh file under profile.d.
vim /etc/profile.d/hadoop.sh
#add the following
export HADOOP_HOME=/opt/hadoop
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
#after saving, reload to apply the configuration
source /etc/profile
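After sourcing, a quick way to confirm the variables took effect; this is a sketch that assumes the /opt/hadoop install location used above, with the exports repeated in case hadoop.sh has not been sourced yet:

```shell
# Sanity check: HADOOP_HOME should be set and its bin directory on PATH.
# The exports mirror hadoop.sh above, for when the file has not been sourced yet.
export HADOOP_HOME=/opt/hadoop
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
case ":$PATH:" in
    *":$HADOOP_HOME/bin:"*) ON_PATH=yes ;;
    *)                      ON_PATH=no ;;
esac
echo "HADOOP_HOME=$HADOOP_HOME on_path=$ON_PATH"
```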
Configure hadoop-env.sh
cd /opt/hadoop
vim etc/hadoop/hadoop-env.sh
#set JAVA_HOME
export JAVA_HOME=/usr/java/latest
Configure core-site.xml
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/data</value>
</property>
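Note that in each of the site files, the property blocks sit inside the file's <configuration> root element. For reference, the complete core-site.xml would look like this (the /data value matches the hadoop.tmp.dir setting above):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/data</value>
    </property>
</configuration>
```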
Configure hdfs-site.xml (a replication factor of 2 matches the two DataNodes in this cluster)
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
Configure yarn-site.xml
<property>
<name>yarn.resourcemanager.hostname</name>
<value>master</value>
</property>
<property>
<name>yarn.webapp.ui2.enable</name>
<value>true</value>
</property>
Configure mapred-site.xml
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
Apply all of the configuration above identically on all three machines.
Configure etc/hadoop/workers (only needed on the master)
slave1
slave2
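Since the site files must match on all three machines, it is convenient to push them from the master. A sketch that only prints the scp commands so they can be reviewed first; drop the echo to actually copy (this relies on the passwordless root SSH set up in the next section):

```shell
# Sketch: sync the Hadoop config directory from master to both slaves.
# Prints the scp commands instead of running them; remove the echo to execute.
CMD_COUNT=0
for host in slave1 slave2; do
    echo scp -r /opt/hadoop/etc/hadoop "root@$host:/opt/hadoop/etc/"
    CMD_COUNT=$((CMD_COUNT + 1))
done
```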
Set up passwordless SSH between the machines
- master(192.168.1.68)
cd ~/.ssh/
ssh-keygen -t rsa -P '' #press Enter through the prompts to generate id_rsa and id_rsa.pub
ssh-copy-id -i id_rsa.pub root@192.168.1.68 #passwordless login to this machine; answer yes and enter the password
ssh-copy-id -i id_rsa.pub root@192.168.1.69 #68 to 69, so scripts can be run from the master
ssh-copy-id -i id_rsa.pub root@192.168.1.70 #68 to 70
- slave1(192.168.1.69)
cd ~/.ssh/
ssh-keygen -t rsa -P ''
ssh-copy-id -i id_rsa.pub root@192.168.1.69
- slave2(192.168.1.70)
cd ~/.ssh/
ssh-keygen -t rsa -P ''
ssh-copy-id -i id_rsa.pub root@192.168.1.70
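Once the keys are distributed, each hop from the master can be verified non-interactively. A sketch that prints the check commands to run on the master; with BatchMode=yes, ssh errors out instead of prompting, so a remaining password prompt shows up as a failure:

```shell
# Sketch: verify passwordless SSH from master to every node.
# Prints the verification commands; run them on master, dropping the echo.
CHECKS=""
for host in master slave1 slave2; do
    echo ssh -o BatchMode=yes "root@$host" hostname
    CHECKS="$CHECKS $host"
done
```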
Start Hadoop
All of the following commands are run on the master.
Format HDFS
The first time you bring up HDFS, it must be formatted. Format a new distributed filesystem as hdfs:
hdfs namenode -format mycluster #on success this creates the NameNode storage directory under /data, the directory specified above
启动HDFS
start-dfs.sh
Barring surprises, this run is bound to fail, with the following errors:
Starting namenodes on [master]
ERROR: Attempting to operate on hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation.
Starting datanodes
ERROR: Attempting to operate on hdfs datanode as root
ERROR: but there is no HDFS_DATANODE_USER defined. Aborting operation.
Starting secondary namenodes [test-68]
ERROR: Attempting to operate on hdfs secondarynamenode as root
ERROR: but there is no HDFS_SECONDARYNAMENODE_USER defined. Aborting operation.
Edit hadoop-env.sh and add the following:
export HDFS_DATANODE_USER=root
export HDFS_NAMENODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
Run start-dfs.sh again and check that the daemons started on each node:
- master
[root@test-68 hadoop]# jps
12304 SecondaryNameNode
12037 NameNode
12441 Jps
- slave1
[root@test-69 hadoop]# jps
11032 Jps
10831 DataNode
- slave2
[root@test-70 .ssh]# jps
11155 DataNode
11368 Jps
- HDFS Web UI: 192.168.1.68:9870
Start YARN
Starting YARN would normally hit the same kind of error as HDFS, but we already set the YARN user variables in hadoop-env.sh above.
start-yarn.sh
Check the processes on each machine
- master
[root@test-68 hadoop]# jps
12304 SecondaryNameNode
13315 Jps
12037 NameNode
12985 ResourceManager
- slave1
[root@test-69 hadoop]# jps
11364 NodeManager
11463 Jps
10831 DataNode
- slave2
[root@test-70 .ssh]# jps
11155 DataNode
11850 Jps
11676 NodeManager
- YARN Web UI: 192.168.1.68:8088
- YARN UI2: 192.168.1.68:8088/ui2
That's all.
References: the official Hadoop documentation