Building a CDH 5.7.2 Hadoop Cluster on CentOS 7 (Offline Installation)
Adapted from blog.csdn.net/shawnhu007/article/details/52579204
Note: this walkthrough builds a CDH 5.7.2 cluster in a local virtual-machine environment. Before starting, prepare three CentOS 7 VMs with 8 GB, 2 GB, and 2 GB of RAM respectively.
Preparation
JDK
Version: jdk-8u101-linux-x64.rpm
Download: Oracle's website
MySQL
rpm bundle: http://dev.mysql.com/get/Downloads/MySQL-5.6/MySQL-5.6.26-1.linux_glibc2.5.x86_64.rpm-bundle.tar
JDBC driver mysql-connector-java.jar:
http://dev.mysql.com/downloads/connector/j/
CDH installation packages
Cloudera Manager package: 5.7.2, cloudera-manager-centos7-cm5.7.2_x86_64.tar.gz
Download: http://archive.cloudera.com/cm5/cm/5/cloudera-manager-centos7-cm5.7.2_x86_64.tar.gz
CDH-5.7.2-1.cdh5.7.2.p0.18-el7.parcel
CDH-5.7.2-1.cdh5.7.2.p0.18-el7.parcel.sha1
manifest.json
The three files above are available at http://archive.cloudera.com/cdh5/parcels/5.7.2/. For CentOS you must download the el7 build: I grabbed el5 at first because I didn't know better, got an error that the parcel did not recognize redhat7, and spent a long time restoring everything to start over.
Virtual machines: installation details are skipped here. Prepare three identical CentOS machines in VMware on your local host, ideally with 8 GB, 2 GB, and 2 GB of RAM; the largest one serves as the master. If memory is tight, allocate less. The three VMs are:
IP address        Hostname   Role
192.168.160.130   hadoop1    master + datanode
192.168.160.131   hadoop2    datanode
192.168.160.132   hadoop3    datanode
Install the JDK (on every machine)
First remove the preinstalled OpenJDK:
[root@hadoop1 ~]# java -version
java version "1.7.0_75"
OpenJDK Runtime Environment (rhel-2.5.4.2.el7_0-x86_64 u75-b13)
OpenJDK 64-Bit Server VM (build 24.75-b04, mixed mode)
[root@hadoop1 ~]# rpm -qa | grep jdk
java-1.7.0-openjdk-1.7.0.75-2.5.4.2.el7_0.x86_64
java-1.7.0-openjdk-headless-1.7.0.75-2.5.4.2.el7_0.x86_64
[root@hadoop1 ~]# yum -y remove java-1.7.0-openjdk-1.7.0.75-2.5.4.2.el7_0.x86_64
[root@hadoop1 ~]# yum -y remove java-1.7.0-openjdk-headless-1.7.0.75-2.5.4.2.el7_0.x86_64
[root@hadoop1 ~]# java -version
bash: /usr/bin/java: No such file or directory
[root@hadoop1 ~]# rpm -ivh jdk-8u101-linux-x64.rpm
[root@hadoop1 ~]# java -version
java version "1.8.0_101"
Java(TM) SE Runtime Environment (build 1.8.0_101-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.101-b13, mixed mode)
Per-node configuration: hostname, SELinux off, firewall off
Hostname: change it on all three machines, making sure each hostname matches its IP, and add the full host list to /etc/hosts on every node.
[root@hadoop1 ~]# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=hadoop1
[root@hadoop1 ~]# vi /etc/hosts
127.0.0.1 localhost.hadoop1
192.168.160.130 hadoop1
192.168.160.131 hadoop2
192.168.160.132 hadoop3
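Two side notes, sketched below under stated assumptions: on CentOS 7 the persistent hostname is normally set with hostnamectl (the HOSTNAME= line in /etc/sysconfig/network is a CentOS 6 convention), and the shared /etc/hosts entries can be generated once so the identical block is appended on every machine. The `gen_hosts` helper name is mine; the IPs come from the table above.

```shell
# CentOS 7 sets the persistent hostname via hostnamectl:
#   hostnamectl set-hostname hadoop1    # run the matching name on each node

# Generate the shared /etc/hosts entries (IPs from the table above),
# so the exact same block can be appended on every machine:
gen_hosts() {
  cat <<'EOF'
192.168.160.130 hadoop1
192.168.160.131 hadoop2
192.168.160.132 hadoop3
EOF
}
gen_hosts    # append with: gen_hosts >> /etc/hosts
```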
Disable SELinux (all nodes; the official documentation requires it)
[root@hadoop1 ~]# vim /etc/sysconfig/selinux
SELINUX=disabled
A reboot is required for this to take effect. After rebooting, verify:
[root@hadoop1 ~]# sestatus -v
SELinux status: disabled
"disabled" means SELinux is now off.
Disable the firewall (note: CentOS 6 uses iptables instead of firewalld)
[root@hadoop1 ~]# systemctl stop firewalld
[root@hadoop1 ~]# systemctl disable firewalld
rm '/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service'
rm '/etc/systemd/system/basic.target.wants/firewalld.service'
[root@hadoop1 ~]# systemctl status firewalld
firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled)
   Active: inactive (dead)
NTP setup (keeps the clocks of the different nodes in sync)
Install and configure the NTP service (on all nodes):
[root@hadoop1 ~]# yum -y install ntp
On the master, edit the configuration:
[root@hadoop1 ~]# vi /etc/ntp.conf
Comment out all existing server lines and add one reachable NTP server (search for public NTP servers; the author chose Shanghai Jiao Tong University's):
server ntp.sjtu.edu.cn iburst
On every other node, point /etc/ntp.conf at the master's address instead:
server 192.168.160.130 iburst
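The ntp.conf edit above can be scripted. A minimal sketch (the `point_ntp` helper name is mine; the upstream server is the author's choice): comment out every stock `server` line and append a single upstream. Try it on a copy first, then apply to /etc/ntp.conf as root.

```shell
# Comment out all existing `server` lines in an ntp.conf and append
# one upstream of our own.
point_ntp() {   # usage: point_ntp CONF UPSTREAM
  sed -i 's/^server /#server /' "$1"
  echo "server $2 iburst" >> "$1"
}
# master:  point_ntp /etc/ntp.conf ntp.sjtu.edu.cn
# workers: point_ntp /etc/ntp.conf 192.168.160.130
```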
Passwordless SSH
During installation the master needs passwordless root login to each node.
First generate a key pair on the master:
[root@hadoop1 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): [enter]
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): [enter]
Enter same passphrase again: [enter]
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
1d:b1:99:51:31:d8:f6:6c:b1:84:f9:af:7b:2c:72:dd root@hadoop1
(RSA key randomart image omitted)
[root@hadoop1 ~]# ssh-copy-id 192.168.160.131
The authenticity of host '192.168.160.131 (192.168.160.131)' can't be established.
ECDSA key fingerprint is e6:81:3f:9e:e6:bb:43:36:c7:4e:0f:6f:63:b2:12:a0.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.160.131's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh '192.168.160.131'"
and check to make sure that only the key(s) you wanted were added.
Repeat the ssh-copy-id step for the other node as well. Afterwards, verify with `ssh <ip>` that you can log straight in to each node without a password prompt.
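The key distribution above can also be wrapped in a loop. A sketch (the `copy_keys` helper name is mine; the worker IPs are assumed from the table above): push the master's public key to both workers in one pass instead of repeating the command by hand.

```shell
# Worker IPs assumed from the host table above.
WORKERS="192.168.160.131 192.168.160.132"
copy_keys() {
  for node in $WORKERS; do
    ssh-copy-id "root@$node"    # prompts once for each node's root password
  done
}
# run with: copy_keys
# then `ssh root@192.168.160.131 hostname` should not prompt for a password
```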
Install MySQL
CentOS 7 ships with MariaDB, which has to be removed first:
[root@hadoop1 ~]# rpm -qa | grep mariadb
mariadb-libs-5.5.41-2.el7_0.x86_64
[root@hadoop1 ~]# rpm -e --nodeps mariadb-libs-5.5.41-2.el7_0.x86_64
Copy the downloaded MySQL rpm bundle to the server and unpack it:
[root@hadoop1 ~]# tar -xvf MySQL-5.6.24-1.linux_glibc2.5.x86_64.rpm-bundle.tar
Then install all of the extracted rpms: rpm -ivh MySQL-*.rpm
Put the default config file in place: cp /usr/share/mysql/my-default.cnf /etc/my.cnf
Add the following settings to the config file and save:
[root@hadoop1 ~]# vim /etc/my.cnf
[mysqld]
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
That completes the installation. Next, initialize the database:
[root@hadoop1 ~]# /usr/bin/mysql_install_db
At this point I ran into the following error:
"FATAL ERROR: please install the following Perl modules before executing /usr/local/mysql/scripts/mysql_install_db:Data::Dumper"
A quick search showed the Perl module has to be installed:
[root@hadoop1~]# yum install -y perl-Module-Install.noarch
Once that finishes, rerun the initialization command above.
Start mysql:
[root@hadoop1 ~]# service mysql restart
ERROR! MySQL server PID file could not be found!
Starting MySQL... SUCCESS!
Look up the randomly generated root password:
[root@hadoop1 ~]# cat /root/.mysql_secret
# The random password set for the root user at Fri Sep 16 11:13:25 2016 (local time): 9mp7uYFmgt6drdq3
Log in and change the password:
[root@hadoop1 ~]# mysql -u root -p
mysql> SET PASSWORD = PASSWORD('123456');
Allow remote access to mysql (the user table lives in the mysql database, so select it first):
mysql> use mysql;
mysql> update user set host='%' where user='root' and host='localhost';
Query OK, 1 row affected (0.05 sec)
Rows matched: 1  Changed: 1  Warnings: 0
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
Enable start on boot:
[root@hadoop1 ~]# chkconfig mysql on
Copy mysql-connector-java to the expected path on every node (all nodes):
[root@hadoop1]# cp mysql-connector-java-5.1.36-bin.jar /usr/share/java/mysql-connector-java.jar
Create the databases:
mysql> create database hive DEFAULT CHARSET utf8 COLLATE utf8_general_ci;
Query OK, 1 row affected (0.00 sec)
mysql> create database amon DEFAULT CHARSET utf8 COLLATE utf8_general_ci;
Query OK, 1 row affected (0.00 sec)
mysql> create database hue DEFAULT CHARSET utf8 COLLATE utf8_general_ci;
Query OK, 1 row affected (0.00 sec)
mysql> create database monitor DEFAULT CHARSET utf8 COLLATE utf8_general_ci;
Query OK, 1 row affected (0.00 sec)
mysql> create database oozie DEFAULT CHARSET utf8 COLLATE utf8_general_ci;
Query OK, 1 row affected (0.00 sec)
mysql> grant all on *.* to root@"%" identified by "123456";
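The five create-database statements above follow one pattern, so they can be generated and piped into the server non-interactively. A sketch (the `make_ddl` helper name is mine; the database names and root password come from the text above):

```shell
# Emit the CREATE DATABASE statements for all five CDH service databases.
make_ddl() {
  for db in hive amon hue monitor oozie; do
    echo "create database $db DEFAULT CHARSET utf8 COLLATE utf8_general_ci;"
  done
}
make_ddl    # apply with: make_ddl | mysql -uroot -p123456
```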
Unpack the cm tarball into the same directory on every server (or unpack it on the master and scp the result to the same path on each node):
[root@hadoop1 ~]# mkdir /opt/cloudera-manager
[root@hadoop1 ~]# tar -axvf cloudera-manager-centos7-cm5.7.2_x86_64.tar.gz -C /opt/cloudera-manager
Create the cloudera-scm user (all nodes):
[root@hadoop1 ~]# useradd --system --home=/opt/cloudera-manager/cm-5.7.2/run/cloudera-scm-server --no-create-home --shell=/bin/false --comment "Cloudera SCM User" cloudera-scm
On the master, create the local metadata directory for cloudera-manager-server:
[root@hadoop1 ~]# mkdir /var/cloudera-scm-server
[root@hadoop1 ~]# chown cloudera-scm:cloudera-scm /var/cloudera-scm-server
[root@hadoop1 ~]# chown cloudera-scm:cloudera-scm /opt/cloudera-manager
Point each node's cloudera-scm-agent at the master:
vim /opt/cloudera-manager/cm-5.7.2/etc/cloudera-scm-agent/config.ini
Set server_host to the hostname of the machine running Cloudera Manager Server, i.e. hadoop1.
Create the parcel-repo directory on the master:
[root@hadoop1 ~]# mkdir -p /opt/cloudera/parcel-repo
[root@hadoop1 ~]# chown cloudera-scm:cloudera-scm /opt/cloudera/parcel-repo
[root@hadoop1 ~]# cp CDH-5.7.2-1.cdh5.7.2.p0.18-el7.parcel CDH-5.7.2-1.cdh5.7.2.p0.18-el7.parcel.sha manifest.json /opt/cloudera/parcel-repo
Note: the downloaded CDH-5.7.2-1.cdh5.7.2.p0.18-el7.parcel.sha1 must be renamed to drop the trailing "1" (.sha1 becomes .sha) before copying.
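Since the .sha file holds the parcel's SHA-1 checksum, it is worth verifying the parcel before Cloudera Manager distributes it; a corrupt download only fails much later, during activation. A sketch (the `check_parcel` helper name is mine):

```shell
# Verify a parcel against its .sha file; exits 0 when the checksum matches.
check_parcel() {   # usage: check_parcel FILE FILE.sha
  [ "$(sha1sum "$1" | awk '{print $1}')" = "$(tr -d ' \n' < "$2")" ]
}
# e.g. in /opt/cloudera/parcel-repo:
#   check_parcel CDH-5.7.2-1.cdh5.7.2.p0.18-el7.parcel \
#                CDH-5.7.2-1.cdh5.7.2.p0.18-el7.parcel.sha && echo "checksum OK"
```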
Create the parcels directory on all nodes:
[root@hadoop1 ~]# mkdir -p /opt/cloudera/parcels
[root@hadoop1 ~]# chown cloudera-scm:cloudera-scm /opt/cloudera/parcels
Explanation: Cloudera Manager takes the CDH parcel from the master's /opt/cloudera/parcel-repo directory and distributes, unpacks, and activates it into /opt/cloudera/parcels on each node.
Initialize the database with scm_prepare_database.sh (on the master):
[root@hadoop1 ~]# /opt/cloudera-manager/cm-5.7.2/share/cmf/schema/scm_prepare_database.sh mysql -hhadoop1 -uroot -p123456 --scm-host hadoop1 scmdbn scmdbu scmdbp
This script creates and configures the database that Cloudera Manager Server needs. The arguments are:
mysql: the database type (if you installed Oracle instead, this argument would be oracle).
-hhadoop1: the database lives on host hadoop1, i.e. the master.
-uroot: connect to mysql as root.
-p123456: the mysql root password.
--scm-host hadoop1: the host running Cloudera Manager Server, usually the same machine as mysql.
The last three arguments are the database name, database user, and database password.
Note: if the script fails with the error below, the hosts file is probably wrong; the "127.0.0.1 localhost.hadoop1" entry (the localhost part) must not be missing.
ERROR com.cloudera.enterprise.dbutil.DbProvisioner - Exception when creating/dropping database with user 'root' and jdbc url 'jdbc:mysql://localhost/?useUnicode=true&characterEncoding=UTF-8'
java.sql.SQLException: Access denied for user 'root'@'localhost' (using password: YES)
I also hit a second problem here:
ERROR com.cloudera.enterprise.dbutil.DbProvisioner - Exception when creating/dropping database with user 'root' and jdbc url 'jdbc:mysql://localhost/?useUnicode=true&characterEncoding=UTF-8'
java.sql.SQLException: Your password has expired. To login you must change it using a client that supports expired passwords.
Either reset the mysql password and flush privileges, or simply mark the password as not expired:
mysql> update user set password_expired='N' where user='root';
Query OK, 2 rows affected (0.00 sec)
Rows matched: 5  Changed: 2  Warnings: 0
mysql> flush privileges;
Start cloudera-scm-server on the master:
[root@hadoop1 ~]# cp /opt/cloudera-manager/cm-5.7.2/etc/init.d/cloudera-scm-server /etc/init.d/cloudera-scm-server
[root@hadoop1 ~]# chkconfig cloudera-scm-server on
At this point `service cloudera-scm-server start` fails with "File not found: /usr/sbin/cmf-server", because the path variable inside cloudera-scm-server is set incorrectly:
[root@hadoop1 ~]# vim /etc/init.d/cloudera-scm-server
Change CMF_DEFAULTS=${CMF_DEFAULTS:-/etc/default} to CMF_DEFAULTS=${CMF_DEFAULTS:-/opt/cloudera-manager/cm-5.7.2/etc/default}
Now `service cloudera-scm-server start` succeeds. To make sure the server comes back after every reboot, add `service cloudera-scm-server restart` to the startup script /etc/rc.local.
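The CMF_DEFAULTS edit can also be done with a one-line sed instead of opening the file by hand. A sketch (the `fix_defaults` helper name is mine; the replacement path comes from the tarball layout described above). Since it takes the file path as an argument, the same helper works for both init scripts:

```shell
# Patch the CMF_DEFAULTS fallback path in a copied init script.
fix_defaults() {   # usage: fix_defaults INIT_SCRIPT
  sed -i 's|CMF_DEFAULTS:-/etc/default|CMF_DEFAULTS:-/opt/cloudera-manager/cm-5.7.2/etc/default|' "$1"
}
# fix_defaults /etc/init.d/cloudera-scm-server
# fix_defaults /etc/init.d/cloudera-scm-agent
```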
Start cloudera-scm-agent (all nodes):
[root@hadoopX ~]# mkdir /opt/cloudera-manager/cm-5.7.2/run/cloudera-scm-agent
[root@hadoopX ~]# cp /opt/cloudera-manager/cm-5.7.2/etc/init.d/cloudera-scm-agent /etc/init.d/cloudera-scm-agent
[root@hadoopX ~]# chkconfig cloudera-scm-agent on
As with the server, `service cloudera-scm-agent start` initially fails with "File not found: /usr/sbin/cmf-agent", because the path variable inside cloudera-scm-agent is wrong; fix CMF_DEFAULTS the same way as for cloudera-scm-server. Likewise, to make sure the agent comes back after every reboot, add `service cloudera-scm-agent restart` to /etc/rc.local.
Once the master has finished installing and started, the remaining steps happen in a browser.
Open 192.168.160.130:7180 and log in with the default credentials admin/admin.
The rest of the installation is driven from the browser:
Choose the Express edition.
Accept the component version notices.
Specify hosts: because the agent is installed and running on every node, and every node's config file points at hadoop1 as the server, each agent reports in to the server. Under "Currently Managed Hosts" all three hosts should appear; select them all and continue. Note: if cloudera-scm-agent was not set to start on boot and a machine was rebooted along the way, that host may not be detected here.
Choose the CDH version.
Distribute the parcels to all nodes.
Run the host correctness checks.
Choose the services to install (here: all of them).
Assign roles.
Configure the database connections.
Review the cluster settings (defaults kept here).
Installation runs.
Installation completes.
At this point the installation is done. Open 192.168.160.130:7180 in a browser to check on the cluster. I saw quite a few alerts; a quick look showed they were mostly memory or disk-usage threshold warnings. Since this cluster runs on resource-constrained local VMs, those alerts can be ignored for now.
Installation finished!!!