HDFS Cluster Configuration
1 Stop and disable the firewall on node1-node4
#Check the firewall status
[root@node1 ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
#Stop the firewall
[root@node1 ~]# systemctl stop firewalld
#Disable the firewall so it does not start again on boot
[root@node1 ~]# systemctl disable firewalld
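The commands above only show node1, but the heading requires node1 through node4. A minimal sketch for covering the remaining nodes from node1 in one pass, assuming passwordless SSH between the nodes has already been set up:

#Stop and disable firewalld on the other nodes over SSH
for h in node2 node3 node4; do
  ssh "$h" "systemctl stop firewalld && systemctl disable firewalld"
done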
2 Configure environment variables
#Edit the environment variables on node1
export HADOOP_HOME=/opt/hadoop-3.1.3
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
#Edit the environment variables on node2:
export HADOOP_HOME=/opt/hadoop-3.1.3
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
#Make the configuration take effect on node1 and node2:
source /etc/profile
#Copy node2's /etc/profile to node3 and node4 (run on node2), then source it on each of them
scp /etc/profile node3:/etc/profile
scp /etc/profile node4:/etc/profile
source /etc/profile
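To confirm the variables took effect, a quick check on each node (standard shell tooling; the expected paths follow from the HADOOP_HOME value above):

#Verify the Hadoop environment on each node
echo $HADOOP_HOME    #should print /opt/hadoop-3.1.3
which hadoop         #should print /opt/hadoop-3.1.3/bin/hadoop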
3 Configure hadoop-env.sh
#Go to $HADOOP_HOME/etc/hadoop
cd /opt/hadoop-3.1.3/etc/hadoop/
#Edit hadoop-env.sh and set
export JAVA_HOME=/usr/java/default
When Hadoop starts daemons on remote nodes over SSH, the remote shell is non-interactive and does not load /etc/profile by default, so the JAVA_HOME variable is not picked up; it must therefore be set explicitly in hadoop-env.sh.
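You can observe this behavior directly, assuming passwordless SSH is already configured between the nodes:

#In a login shell on node1, JAVA_HOME is set
[root@node1 ~]# echo $JAVA_HOME
/usr/java/default
#But a non-interactive remote command does not source /etc/profile,
#so this prints an empty line if JAVA_HOME lives only in /etc/profile:
[root@node1 ~]# ssh node2 'echo $JAVA_HOME'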
4 Configure workers
Edit the workers file (named slaves in Hadoop 2.x) to specify the hosts where the DataNodes run:

node2
node3
node4

Note: the file must not contain blank lines, and the entries must not have trailing spaces.
5 Configure core-site.xml
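The original post does not show the file contents here. A minimal sketch consistent with the rest of this walkthrough: fs.defaultFS points at the NameNode on node1 (9820 is the Hadoop 3.x default RPC port; the choice of port here is an assumption), and hadoop.tmp.dir matches the /var/itbaizhan/hadoop/full path that appears in the format step below:

<!-- core-site.xml: minimal example; the port is an assumption -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://node1:9820</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/var/itbaizhan/hadoop/full</value>
  </property>
</configuration>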
6 Configure hdfs-site.xml
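This section's contents are also omitted in the original. A minimal sketch, assuming a replication factor of 2 (a choice, not a requirement, for a cluster with three DataNodes) and placing the SecondaryNameNode on node2, which matches the startup output in step 9 (9868 is the Hadoop 3.x default SecondaryNameNode HTTP port):

<!-- hdfs-site.xml: minimal example; the replication value is an assumption -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>node2:9868</value>
  </property>
</configuration>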
7 Copy to node2-node4
#First pack the installation into a tarball
[root@node1 opt]# tar -zcvf hadoop-3.1.3.tar.gz hadoop-3.1.3/
#scp /opt/hadoop-3.1.3.tar.gz to the corresponding directory on node2, node3 and node4
[root@node1 opt]# scp hadoop-3.1.3.tar.gz node2:/opt
[root@node1 opt]# scp hadoop-3.1.3.tar.gz node3:/opt
[root@node1 opt]# scp hadoop-3.1.3.tar.gz node4:/opt
#Unpack it on node2, node3 and node4
tar -zxvf hadoop-3.1.3.tar.gz
#Test on node1-node4
[root@node4 opt]# had    #press Tab here; if it auto-completes to hadoop, the environment variables are working
#Or test with the hadoop version command
[root@node4 opt]# hadoop version
Hadoop 3.1.3
Source code repository -r ba631c436b806728f8ec2f54ab1e289526c90579
Compiled by ztang on 2019-09-12T02:47Z
Compiled with protoc 2.5.0
From source with checksum ec785077c385118ac91aadde5ec9799
This command was run using /opt/hadoop-3.1.3/share/hadoop/common/hadoop-common-3.1.3.jar
8 Format the NameNode
#Run on node1:
[root@node1 ~]# hdfs namenode -format
[root@node1 ~]# ll /var/itbaizhan/hadoop/full/dfs/name/current/
total 16
-rw-r--r-- 1 root root 391 Oct  8 20:36 fsimage_0000000000000000000
-rw-r--r-- 1 root root  62 Oct  8 20:36 fsimage_0000000000000000000.md5
-rw-r--r-- 1 root root   2 Oct  8 20:36 seen_txid
-rw-r--r-- 1 root root 216 Oct  8 20:36 VERSION
#Run jps on all four nodes; jps lists the Java processes on the current system
[root@node1 ~]# jps
2037 Jps
[root@node2 ~]# jps
1981 Jps
[root@node3 ~]# jps
1979 Jps
[root@node4 ~]# jps
1974 Jps
#Apart from jps itself there are no other Java processes: formatting does not start any daemons.
[root@node1 ~]# vim /var/itbaizhan/hadoop/full/dfs/name/current/VERSION
#Sat Oct 09 10:42:49 CST 2021
namespaceID=1536048782
clusterID=CID-7ecb999c-ef5a-4396-bdc7-c9a741a797c4    #cluster ID
cTime=1633747369798
storageType=NAME_NODE    #the role is NameNode
blockpoolID=BP-1438277808-192.168.20.101-1633747369798    #ID of the block pool created by this format
layoutVersion=-64
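One practical consequence of the clusterID field: each DataNode records the same clusterID in its own VERSION file on first startup, and reformatting the NameNode generates a new one, after which the old DataNodes refuse to register. A quick consistency check once the cluster is running (the data path below is an assumption, derived from the same /var/itbaizhan/hadoop/full base directory the NameNode uses):

#On a DataNode, compare its clusterID with the NameNode's
[root@node2 ~]# grep clusterID /var/itbaizhan/hadoop/full/dfs/data/current/VERSION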
9 Start HDFS
#Start the HDFS cluster from node1
[root@node1 ~]# start-dfs.sh
#The following errors appear on startup
Starting namenodes on [node1]
ERROR: Attempting to operate on hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation.
Starting datanodes
ERROR: Attempting to operate on hdfs datanode as root
ERROR: but there is no HDFS_DATANODE_USER defined. Aborting operation.
Starting secondary namenodes [node2]
ERROR: Attempting to operate on hdfs secondarynamenode as root
ERROR: but there is no HDFS_SECONDARYNAMENODE_USER defined. Aborting operation.
#The fix is to edit start-dfs.sh and add the following
[root@node1 ~]# vim /opt/hadoop-3.1.3/sbin/start-dfs.sh
HDFS_NAMENODE_USER=root
HDFS_DATANODE_USER=root
HDFS_DATANODE_SECURE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
#Run start-dfs.sh again, then check that the expected role is running on each of the four nodes
[root@node1 ~]# jps
3947 Jps
3534 NameNode
[root@node2 ~]# jps
3386 Jps
3307 SecondaryNameNode
3148 DataNode
[root@node3 ~]# jps
3303 Jps
3144 DataNode
[root@node4 ~]# jps
3310 Jps
3151 DataNode
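With all roles up, a short smoke test confirms the cluster actually accepts data (hdfs dfsadmin and hdfs dfs are standard HDFS commands; the directory name here is just an example). The NameNode web UI should also be reachable at http://node1:9870, the Hadoop 3.x default port:

#Report capacity and live DataNodes
[root@node1 ~]# hdfs dfsadmin -report
#Write a small file into HDFS and list it back
[root@node1 ~]# hdfs dfs -mkdir /test
[root@node1 ~]# hdfs dfs -put /etc/profile /test
[root@node1 ~]# hdfs dfs -ls /test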