This article shares the operations involved in setting up a Hadoop cluster. The steps are quite practical, so they are shared here for reference; follow along and take a look.

創(chuàng)新互聯(lián)專注于吉林網(wǎng)站建設(shè)服務(wù)及定制,我們擁有豐富的企業(yè)做網(wǎng)站經(jīng)驗(yàn)。 熱誠(chéng)為您提供吉林營(yíng)銷型網(wǎng)站建設(shè),吉林網(wǎng)站制作、吉林網(wǎng)頁(yè)設(shè)計(jì)、吉林網(wǎng)站官網(wǎng)定制、小程序制作服務(wù),打造吉林網(wǎng)絡(luò)公司原創(chuàng)品牌,更為您提供吉林網(wǎng)站排名全網(wǎng)營(yíng)銷落地服務(wù)。
Hadoop cluster
First, disable SELinux:
vim /etc/selinux/config
SELINUX=disabled
Then stop and disable the firewall:
systemctl stop firewalld
systemctl disable firewalld
1. On both the master and the slave machines, set the hostname in /etc/hostname, then add the following entries to /etc/hosts:
192.168.1.129 hadoop1
192.168.1.130 hadoop2
192.168.1.132 hadoop3
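The host entries above can be appended with a heredoc; a minimal sketch, written to a demo file here (on a real node the target would be /etc/hosts itself):

```shell
# Append the cluster's name-resolution entries. HOSTS points at a demo
# file for illustration; on a real master/slave you would target /etc/hosts.
HOSTS=/tmp/demo_hosts
rm -f "$HOSTS"                 # start fresh for the demo
cat >> "$HOSTS" <<'EOF'
192.168.1.129 hadoop1
192.168.1.130 hadoop2
192.168.1.132 hadoop3
EOF
grep -c hadoop "$HOSTS"        # all three node names present; prints 3
```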
2. Passwordless login
On the master (hadoop1):
Change to /root/.ssh
ssh-keygen -t rsa
Press Enter at every prompt.
This generates id_rsa and id_rsa.pub.
cat id_rsa.pub >> master
Save the public key into the file master, then send it to the slave machines:
scp master hadoop2:/root/.ssh/
Log in to each slave (hadoop2, hadoop3)
Append master to authorized_keys:
cat master >> authorized_keys
Do the same on the other slave.
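The key-generation step above can also run non-interactively; a minimal sketch using a demo path (on the real master the key lives at /root/.ssh/id_rsa, and ssh-copy-id is an alternative to the manual scp/cat steps):

```shell
# Generate an RSA key pair with no passphrase, without any prompts.
KEY=/tmp/demo_id_rsa            # demo location; the real master uses /root/.ssh/id_rsa
rm -f "$KEY" "$KEY.pub"         # avoid an overwrite prompt on re-runs
ssh-keygen -t rsa -N '' -f "$KEY" -q
ls "$KEY" "$KEY.pub"            # both the private and public key now exist
# On a real cluster, distribute the public key to each slave, e.g.:
#   ssh-copy-id -i "$KEY.pub" root@hadoop2
#   ssh-copy-id -i "$KEY.pub" root@hadoop3
```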
3. Configuration
Unpack hadoop-2.6.0.tar.gz into /usr/lib/:
tar -zxvf hadoop-2.6.0.tar.gz -C /usr/lib/
cd /usr/lib/hadoop-2.6.0/etc/hadoop
This directory holds the configuration files.
4. Install ZooKeeper
Configure the environment variables:
export JAVA_HOME=/usr/lib/jdk1.7.0_79
export MAVEN_HOME=/usr/lib/apache-maven-3.3.3
export LD_LIBRARY_PATH=/usr/lib/protobuf
export ANT_HOME=/usr/lib/apache-ant-1.9.4
export ZOOKEEPER_HOME=/usr/lib/zookeeper-3.4.6
export PATH=$JAVA_HOME/bin:$MAVEN_HOME/bin:$LD_LIBRARY_PATH/bin:$ANT_HOME/bin:$ZOOKEEPER_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$ZOOKEEPER_HOME/lib
4.1 In zookeeper/conf/, copy zoo_sample.cfg to zoo.cfg:
cp zoo_sample.cfg zoo.cfg
Change:
dataDir=/usr/lib/zookeeper-3.4.6/datas
Add:
server.1=hadoop1:2888:3888
server.2=hadoop2:2888:3888
server.3=hadoop3:2888:3888
Create /usr/lib/zookeeper-3.4.6/datas, and inside it create a file named myid containing this server's number (1 on hadoop1, 2 on hadoop2, 3 on hadoop3).
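The myid step above can be sketched as follows, using a demo path (the real path is /usr/lib/zookeeper-3.4.6/datas):

```shell
# Create the ZooKeeper data directory and write this server's id.
# The number must match this host's server.N line in zoo.cfg
# (1 on hadoop1, 2 on hadoop2, 3 on hadoop3).
DATADIR=/tmp/zk-demo/datas      # demo path; real path is /usr/lib/zookeeper-3.4.6/datas
mkdir -p "$DATADIR"
echo 1 > "$DATADIR/myid"        # on hadoop2 write 2, on hadoop3 write 3
cat "$DATADIR/myid"             # prints 1
```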
Copy zookeeper-3.4.6 and /etc/profile to hadoop2 and hadoop3.
Run
Execute on hadoop1, hadoop2 and hadoop3:
zkServer.sh start
Check the status:
zkServer.sh status
Seeing Mode: leader on one node and Mode: follower on the others means the ensemble is running normally.
5. Install Hadoop
Execute on the master (hadoop1):
Unpack the previously compiled hadoop-2.6.0.tar.gz into /usr/lib/.
Configure the environment variables:
export JAVA_HOME=/usr/lib/jdk1.7.0_79
export MAVEN_HOME=/usr/lib/apache-maven-3.3.3
export LD_LIBRARY_PATH=/usr/lib/protobuf
export ANT_HOME=/usr/lib/apache-ant-1.9.4
export ZOOKEEPER_HOME=/usr/lib/zookeeper-3.4.6
export HADOOP_HOME=/usr/lib/hadoop-2.6.0
export PATH=$JAVA_HOME/bin:$MAVEN_HOME/bin:$LD_LIBRARY_PATH/bin:$ANT_HOME/bin:$ZOOKEEPER_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
(hadoop2 and hadoop3 do not need Maven, Ant, etc.; those were only configured to compile Hadoop.)
5.1 Edit the configuration files
cd hadoop-2.6.0/etc/hadoop
The files to edit: hadoop-env.sh, core-site.xml, hdfs-site.xml, yarn-site.xml, mapred-site.xml, slaves
5.1.1 hadoop-env.sh
export JAVA_HOME=/usr/lib/jdk1.7.0_79
5.1.2 core-site.xml
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://cluster1</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/usr/lib/hadoop-2.6.0/tmp</value>
</property>
<property>
  <name>ha.zookeeper.quorum</name>
  <value>hadoop1:2181,hadoop2:2181,hadoop3:2181</value>
</property>
5.1.3 hdfs-site.xml
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
<property>
  <name>dfs.nameservices</name>
  <value>cluster1</value>
</property>
<property>
  <name>dfs.ha.namenodes.cluster1</name>
  <value>hadoop101,hadoop102</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.cluster1.hadoop101</name>
  <value>hadoop1:9000</value>
</property>
<property>
  <name>dfs.namenode.http-address.cluster1.hadoop101</name>
  <value>hadoop1:50070</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.cluster1.hadoop102</name>
  <value>hadoop2:9000</value>
</property>
<property>
  <name>dfs.namenode.http-address.cluster1.hadoop102</name>
  <value>hadoop2:50070</value>
</property>
<property>
  <name>dfs.ha.automatic-failover.enabled.cluster1</name>
  <value>true</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://hadoop2:8485;hadoop3:8485/cluster1</value>
</property>
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/usr/lib/hadoop-2.6.0/tmp/journal</value>
</property>
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence</value>
</property>
<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/root/.ssh/id_rsa</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.cluster1</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
5.1.4 yarn-site.xml
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>hadoop1</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
5.1.5 mapred-site.xml
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
5.1.6 slaves
hadoop2
hadoop3
6. Starting the cluster:
6.1 Format the cluster state in ZooKeeper
Execute on hadoop1:
bin/hdfs zkfc -formatZK
6.2 Start the JournalNode cluster; execute on hadoop2 and hadoop3:
sbin/hadoop-daemon.sh start journalnode
6.3 Format and start the NameNodes
Execute on hadoop1:
bin/hdfs namenode -format
sbin/hadoop-daemon.sh start namenode
Execute on hadoop2:
bin/hdfs namenode -bootstrapStandby
sbin/hadoop-daemon.sh start namenode
Start the DataNodes directly from hadoop1:
sbin/hadoop-daemons.sh start datanode
Start zkfc; this process must run wherever a NameNode runs.
Execute on hadoop1 and hadoop2:
sbin/hadoop-daemon.sh start zkfc
Start YARN (the ResourceManager and NodeManagers); execute on hadoop1:
sbin/start-yarn.sh
Open in a browser:
http://192.168.1.129:50070
Overview 'hadoop1:9000' (active)
http://192.168.1.130:50070/
Overview 'hadoop2:9000' (standby)
To browse the HDFS root directory:
hadoop fs -ls /
Thanks for reading! That wraps up this article on Hadoop cluster operations. Hopefully the content above is of some help; if you found the article useful, share it so more people can see it!