Set up the data1 server
-
Configure a static IP
# On Ubuntu 17.10, edit this file
sudo nano /etc/netplan/01-netcfg.yaml
01-netcfg.yaml
# This file describes the network interfaces available on your system
# For more information, see netplan(5).
# Note: if gateway4 is wrong, the machine cannot reach the Internet
network:
  version: 2
  renderer: networkd
  ethernets:
    ens33:
      dhcp4: no
      dhcp6: no
      addresses: [192.168.72.101/24]
      gateway4: 192.168.72.2
      nameservers:
        addresses: [114.114.114.114, 8.8.8.8]
-
Apply the new IP configuration
sudo netplan apply
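To confirm the new address took effect, a quick check (ens33 is the interface name from the config above):
# Verify the static address and the default route
ip addr show ens33
ip route | grep default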
-
Edit the hostname
sudo nano /etc/hostname
# Change the hostname to data1
data1
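Alternatively (a sketch; hostnamectl ships with systemd on Ubuntu 17.10), the hostname can be set in a single step:
# Writes /etc/hostname and updates the live hostname at once
sudo hostnamectl set-hostname data1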
-
Edit the hosts file
sudo nano /etc/hosts
127.0.0.1 localhost
127.0.1.1 ubuntu
# Add the cluster hosts
192.168.72.100 master
192.168.72.101 data1
192.168.72.102 data2
192.168.72.103 data3
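Once the other VMs exist, a quick sanity check that the names resolve and the hosts answer:
# Each cluster name should resolve to its static IP
ping -c 2 master
ping -c 2 data2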
-
Edit core-site.xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>
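fs.default.name is the legacy alias of fs.defaultFS; both work on Hadoop 2.x. Once the cluster is running, the effective value can be checked with the standard Hadoop CLI:
# Prints the configured default filesystem, e.g. hdfs://master:9000
hdfs getconf -confKey fs.defaultFS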
-
Edit yarn-site.xml
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8025</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8050</value>
  </property>
</configuration>
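After YARN is started (see below), you can confirm the ResourceManager is listening on the three configured ports from master (a quick check; ss is part of iproute2 on Ubuntu):
# 8025 = resource tracker, 8030 = scheduler, 8050 = yarn.resourcemanager.address
ss -tlnp | grep -E '8025|8030|8050'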
-
Edit mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:54311</value>
  </property>
</configuration>
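mapred.job.tracker is a Hadoop 1 (JobTracker) property; on a Hadoop 2.x cluster running on YARN the framework is normally selected explicitly, so mapred-site.xml usually also carries (an assumption about the Hadoop version in use):
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>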
-
Edit hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/hadoop/hadoop_data/hdfs/namenode</value>
  </property>
</configuration>
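dfs.namenode.name.dir is only read by the NameNode (master). Since the DataNode setup below creates .../hdfs/datanode on each data node, the data nodes presumably also need the matching storage property (a sketch; without it a DataNode falls back to a default directory under hadoop.tmp.dir):
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/usr/local/hadoop/hadoop_data/hdfs/datanode</value>
</property>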
Clone the virtual machines
-
Clone data2 and regenerate its MAC address
-
Change the hostname and IP address
sudo nano /etc/hostname
sudo nano /etc/netplan/*.yaml
sudo netplan apply
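On each clone only two values change, following the hosts table above:
# data2:  hostname data2,  addresses: [192.168.72.102/24]
# data3:  hostname data3,  addresses: [192.168.72.103/24]
# master: hostname master, addresses: [192.168.72.100/24]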
-
Clone data3 and regenerate its MAC address
- Change the hostname and IP address
-
Clone master and regenerate its MAC address
- Change the hostname and IP address
From master, connect to data1, data2, and data3 and create the HDFS directories
Start the four virtual machines (master, data1, data2, and data3)
-
Connect to data1, data2, and data3 over SSH and set up each DataNode
# Connect to data1
ssh data1
# Remove any existing HDFS directories
sudo rm -rf /usr/local/hadoop/hadoop_data/hdfs
# Create the DataNode storage directory
mkdir -p /usr/local/hadoop/hadoop_data/hdfs/datanode
# Change the owner of the directory to hduser
sudo chown -R hduser:hduser /usr/local/hadoop
exit
# Repeat the same steps on data2 and data3
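The same steps can be pushed to all three data nodes from master in one loop (a sketch; assumes hduser has passwordless SSH to each node and passwordless sudo there):
for h in data1 data2 data3; do
  ssh $h 'sudo rm -rf /usr/local/hadoop/hadoop_data/hdfs &&
          mkdir -p /usr/local/hadoop/hadoop_data/hdfs/datanode &&
          sudo chown -R hduser:hduser /usr/local/hadoop'
done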
-
Set up the NameNode (on master)
# Remove any existing HDFS directories
sudo rm -rf /usr/local/hadoop/hadoop_data/hdfs
# Create the NameNode storage directory
mkdir -p /usr/local/hadoop/hadoop_data/hdfs/namenode
# Change the owner of the directory to hduser
sudo chown -R hduser:hduser /usr/local/hadoop
# Format the NameNode HDFS directory
hadoop namenode -format
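hadoop namenode -format still works on Hadoop 2.x but prints a deprecation warning; the current form of the command is:
hdfs namenode -format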
Start the Hadoop Multi Node Cluster
# Start HDFS
start-dfs.sh
# Start YARN
start-yarn.sh
# Or start both at once
start-all.sh
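To verify the daemons came up, jps (part of the JDK) should show the expected processes on each node, and the NodeManagers should be registered with the ResourceManager:
# On master
jps    # NameNode, SecondaryNameNode, ResourceManager
# On data1 / data2 / data3
jps    # DataNode, NodeManager
# List the registered NodeManagers (run on master)
yarn node -list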
-
Hadoop ResourceManager Web UI
http://192.168.72.100:8088/
-
NameNode Web UI
http://192.168.72.100:50070
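Note: port 50070 is the NameNode UI port on Hadoop 2.x; on Hadoop 3.x it moved to 9870.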