
Installing and Configuring Hadoop 2.2.0 on CentOS

Published: 2015-04-21 10:51:22 · Source: linux website · Author: zhoudetiankong

Deploying Hadoop 2.2.0

Environment:

Operating system: CentOS 6.4, 64-bit

Hadoop version: hadoop-2.2.0, a 64-bit build compiled from source on CentOS.


Steps:

1. Assume a cluster of four machines, each with a user account named myhadoop (this keeps installation and configuration simple and avoids permission problems).

Hostname   IP address       Role
hadoop1    10.172.169.191   namenode, ResourceManager
hadoop2    10.172.169.192   datanode, NodeManager
hadoop3    10.172.169.193   datanode, NodeManager
hadoop4    10.172.169.194   datanode, NodeManager


2. Install the Java JDK on every machine and set the corresponding environment variables. The JDK install path and the Java environment settings must be identical on all machines; a sketch follows below.
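A minimal sketch of the Java settings, assuming the JDK is unpacked under /usr/java/jdk1.7.0 (a placeholder path; substitute the actual install location), appended to /etc/profile or each user's ~/.bashrc:

# Placeholder JDK path; must match the actual install location on every machine
export JAVA_HOME=/usr/java/jdk1.7.0
export PATH=$JAVA_HOME/bin:$PATH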


3. Disable the firewall

Switch to the root account.

Enable at boot: chkconfig iptables on
Disable at boot: chkconfig iptables off

These chkconfig settings take effect permanently after a reboot.
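To stop the running firewall immediately, without waiting for a reboot, you can additionally use the standard CentOS 6 service command:

service iptables stop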


4. Configure /etc/hosts on every machine

As root, open /etc/hosts and add the IP-to-hostname mappings:

10.172.169.191 hadoop1
10.172.169.192 hadoop2
10.172.169.193 hadoop3
10.172.169.194 hadoop4


5. Set up passwordless SSH login

(1) On the namenode machine:

cd /home/myhadoop
ssh-keygen -t rsa

Press Enter at every prompt.

(2) Append the public key to the local authorized-keys file:

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

(3) Copy the authorized-keys file to each datanode:

scp ~/.ssh/authorized_keys myhadoop@10.172.169.192:/home/myhadoop/.ssh/authorized_keys
scp ~/.ssh/authorized_keys myhadoop@10.172.169.193:/home/myhadoop/.ssh/authorized_keys
scp ~/.ssh/authorized_keys myhadoop@10.172.169.194:/home/myhadoop/.ssh/authorized_keys

(4) Fix the file permissions on all machines:

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys

(5) Test passwordless SSH login. If the namenode can log in to every datanode without a password prompt, the configuration succeeded.
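A quick way to test all three datanodes at once, assuming the hostnames from step 4; each iteration should print the remote hostname without asking for a password:

for h in hadoop2 hadoop3 hadoop4; do ssh myhadoop@$h hostname; done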

6. Install Hadoop 2.2.0

(1) Unpack the hadoop-2.2.0 tarball (hadoop-2.2.0.tar.gz) into /home/myhadoop/.
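For example, assuming the tarball sits in the current directory:

tar -zxvf hadoop-2.2.0.tar.gz -C /home/myhadoop/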

(2) Edit the configuration files.

Open hadoop-2.2.0/etc/hadoop and modify the files below.

(2.1) hadoop-env.sh

Find JAVA_HOME and set it to the actual JDK path.

(2.2) yarn-env.sh

Find JAVA_HOME and set it to the actual JDK path, as in the line shown below.
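In both files the line to edit looks like this (the JDK path shown is a placeholder; use the path from step 2):

export JAVA_HOME=/usr/java/jdk1.7.0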

(2.3) slaves

List every datanode hostname, one per line. For this example:

hadoop2

hadoop3

hadoop4

(2.4) core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop1:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/myhadoop/hadoop-2.2.0/mytmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>hadoop.proxyuser.root.hosts</name>
        <value>hadoop1</value>
    </property>
    <property>
        <name>hadoop.proxyuser.root.groups</name>
        <value>*</value>
    </property>
</configuration>

(2.5) hdfs-site.xml

<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/home/myhadoop/name</value>
        <final>true</final>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/home/myhadoop/data</value>
        <final>true</final>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
</configuration>

(2.6) mapred-site.xml (if this file does not exist yet, create it by copying mapred-site.xml.template)

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>hadoop1:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hadoop1:19888</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.intermediate-done-dir</name>
        <value>/mr-history/tmp</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.done-dir</name>
        <value>/mr-history/done</value>
    </property>
</configuration>

(2.7) yarn-site.xml

<configuration>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>hadoop1:18040</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>hadoop1:18030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>hadoop1:18025</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>hadoop1:18041</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>hadoop1:8088</value>
    </property>
    <property>
        <name>yarn.nodemanager.local-dirs</name>
        <value>/home/myhadoop/mynode/my</value>
    </property>
    <property>
        <name>yarn.nodemanager.log-dirs</name>
        <value>/home/myhadoop/mynode/logs</value>
    </property>
    <property>
        <name>yarn.nodemanager.log.retain-seconds</name>
        <value>10800</value>
    </property>
    <property>
        <name>yarn.nodemanager.remote-app-log-dir</name>
        <value>/logs</value>
    </property>
    <property>
        <name>yarn.nodemanager.remote-app-log-dir-suffix</name>
        <value>logs</value>
    </property>
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>-1</value>
    </property>
    <property>
        <name>yarn.log-aggregation.retain-check-interval-seconds</name>
        <value>-1</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

(3) Once the files above are configured, copy the whole hadoop-2.2.0 directory to the same path on every datanode machine, as in the example below.
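For example, with scp (using the hostnames from step 4):

scp -r /home/myhadoop/hadoop-2.2.0 myhadoop@hadoop2:/home/myhadoop/
scp -r /home/myhadoop/hadoop-2.2.0 myhadoop@hadoop3:/home/myhadoop/
scp -r /home/myhadoop/hadoop-2.2.0 myhadoop@hadoop4:/home/myhadoop/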

(4) Edit /etc/profile

Switch to the root user.

Find the line "export PATH USER LOGNAME MAIL HOSTNAME HISTSIZE HISTCONTROL" and add the following below it:

#hadoop variable settings

export HADOOP_HOME=/home/myhadoop/hadoop-2.2.0

export HADOOP_COMMON_HOME=$HADOOP_HOME

export HADOOP_HDFS_HOME=$HADOOP_HOME

export HADOOP_MAPRED_HOME=$HADOOP_HOME

export HADOOP_YARN_HOME=$HADOOP_HOME

export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop

export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/lib

After the configuration is complete, reboot the machine (or run source /etc/profile to apply it to the current session).
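To confirm the variables took effect, open a new shell and run:

hadoop version

It should print the Hadoop 2.2.0 version banner.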


7. Starting and stopping Hadoop

(1) Format the namenode. This is needed only before the very first start, never again afterward:

cd /home/myhadoop/hadoop-2.2.0/bin

hdfs namenode -format

(2) Start: on the namenode machine, enter /home/myhadoop/hadoop-2.2.0/sbin and run:

start-dfs.sh
start-yarn.sh

(The two scripts above can be replaced by the single script start-all.sh.)

Then start the job-history server:

mr-jobhistory-daemon.sh start historyserver

(3) Stop:

stop-all.sh

mr-jobhistory-daemon.sh stop historyserver
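To verify that the daemons are up, the JDK's jps command lists the running Java processes on each machine; after a successful start, the namenode machine should show NameNode and ResourceManager (plus JobHistoryServer if the history server was started), and each datanode should show DataNode and NodeManager:

jps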


8. Web interfaces

After starting Hadoop, check the cluster in a browser at:

http://hadoop1:50070 (HDFS NameNode)
http://hadoop1:8088 (YARN ResourceManager)
http://hadoop1:19888 (MapReduce JobHistory server)