Teacher Yang Xiao's Classroom: Hadoop Environment Setup (Part 2)


1. Getting to Know the Hadoop Configuration Files

1.1 The current location and the files inside it

[root@hadoop1 hadoop-2.7.4]# ll
总用量 116
drwxr-xr-x.  2 20415 101   194 8月  1 2017 bin
drwxr-xr-x.  3 20415 101    20 8月  1 2017 etc
drwxr-xr-x.  2 20415 101   106 8月  1 2017 include
drwxr-xr-x.  3 20415 101    20 8月  1 2017 lib
drwxr-xr-x.  2 20415 101   239 8月  1 2017 libexec
-rw-r--r--.  1 20415 101 86424 8月  1 2017 LICENSE.txt
-rw-r--r--.  1 20415 101 14978 8月  1 2017 NOTICE.txt
-rw-r--r--.  1 20415 101  1366 8月  1 2017 README.txt
drwxr-xr-x.  2 20415 101  4096 8月  1 2017 sbin
drwxr-xr-x.  4 20415 101    31 8月  1 2017 share
drwxr-xr-x. 19 20415 101  4096 8月  1 2017 src

1.2 First, list all the configuration files

# Change into the directory that holds Hadoop's configuration files
[root@hadoop1 hadoop-2.7.4]# cd etc/hadoop/
# List all the configuration files
[root@hadoop1 hadoop]# ll
总用量 152
-rw-r--r--. 1 20415 101  4436 8月  1 2017 capacity-scheduler.xml
-rw-r--r--. 1 20415 101  1335 8月  1 2017 configuration.xsl
-rw-r--r--. 1 20415 101   318 8月  1 2017 container-executor.cfg
-rw-r--r--. 1 20415 101   774 8月  1 2017 core-site.xml
-rw-r--r--. 1 20415 101  3670 8月  1 2017 hadoop-env.cmd
-rw-r--r--. 1 20415 101  4224 8月  1 2017 hadoop-env.sh
-rw-r--r--. 1 20415 101  2598 8月  1 2017 hadoop-metrics2.properties
-rw-r--r--. 1 20415 101  2490 8月  1 2017 hadoop-metrics.properties
-rw-r--r--. 1 20415 101  9683 8月  1 2017 hadoop-policy.xml
-rw-r--r--. 1 20415 101   775 8月  1 2017 hdfs-site.xml
-rw-r--r--. 1 20415 101  1449 8月  1 2017 httpfs-env.sh
-rw-r--r--. 1 20415 101  1657 8月  1 2017 httpfs-log4j.properties
-rw-r--r--. 1 20415 101    21 8月  1 2017 httpfs-signature.secret
-rw-r--r--. 1 20415 101   620 8月  1 2017 httpfs-site.xml
-rw-r--r--. 1 20415 101  3518 8月  1 2017 kms-acls.xml
-rw-r--r--. 1 20415 101  1527 8月  1 2017 kms-env.sh
-rw-r--r--. 1 20415 101  1631 8月  1 2017 kms-log4j.properties
-rw-r--r--. 1 20415 101  5540 8月  1 2017 kms-site.xml
-rw-r--r--. 1 20415 101 11237 8月  1 2017 log4j.properties
-rw-r--r--. 1 20415 101   951 8月  1 2017 mapred-env.cmd
-rw-r--r--. 1 20415 101  1383 8月  1 2017 mapred-env.sh
-rw-r--r--. 1 20415 101  4113 8月  1 2017 mapred-queues.xml.template
-rw-r--r--. 1 20415 101   758 8月  1 2017 mapred-site.xml.template
-rw-r--r--. 1 20415 101    10 8月  1 2017 slaves
-rw-r--r--. 1 20415 101  2316 8月  1 2017 ssl-client.xml.example
-rw-r--r--. 1 20415 101  2697 8月  1 2017 ssl-server.xml.example
-rw-r--r--. 1 20415 101  2250 8月  1 2017 yarn-env.cmd
-rw-r--r--. 1 20415 101  4567 8月  1 2017 yarn-env.sh
-rw-r--r--. 1 20415 101   690 8月  1 2017 yarn-site.xml

2. Modifying the Hadoop Configuration Files

2.1 Edit hadoop-env.sh with the vim command

[root@hadoop1 hadoop]# vim hadoop-env.sh
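The change normally made in this file is to replace the default export JAVA_HOME=${JAVA_HOME} line with the absolute path of the JDK installed earlier in this series. The path below is a placeholder assumption; point it at wherever the JDK actually lives (the format log further down reports java = 1.8.0_291):

# Hard-code the JDK location so the Hadoop scripts do not depend on the login shell's
# environment. The path is a placeholder assumption; adjust it to the real install dir.
export JAVA_HOME=/usr/local/jdk1.8.0_291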

2.2 Edit core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop1:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop/hadoop-2.7.4/tmp</value>
    </property>
</configuration>
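Here fs.defaultFS is the NameNode address that clients and the other daemons connect to, and hadoop.tmp.dir is the base directory under which HDFS keeps its data; the dfs/name directory reported by the format step below is created under this path.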

2.3 Edit hdfs-site.xml
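For a single-node cluster like this one, the usual hdfs-site.xml change is to set the block replication factor to 1, which matches the "defaultReplication = 1" line in the format log below. A minimal sketch, assuming nothing else needs to be overridden:

<configuration>
    <!-- Assumed single-node setting: keep one replica per block,
         matching "defaultReplication = 1" in the format log below. -->
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>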

3. Initialize the File System

[root@hadoop1 hadoop]# hdfs namenode -format
22/03/23 12:45:51 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = hadoop1/192.168.101.166
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.7.4
STARTUP_MSG: classpath = /usr/local/hadoop/hadoop-2.7.4/etc/hadoop:/usr/local/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/hadoop-2.7.4/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/hadoop-2.7.4/share/hadoop/common/lib/curator-framework-2.7.1.jar:/usr/local/hadoop/hadoop-2.7.4/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop/hadoop-2.7.4/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/usr/local/hadoop/hadoop-2.7.4/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop/hadoop-2.7.4/share/hadoop/common/lib/...
STARTUP_MSG: build = -r cd915e1e8d9d0131462a0b7301586c175728a282; compiled by 'kshvachk' on 2017-08-01T00:29Z
STARTUP_MSG: java = 1.8.0_291
************************************************************/
22/03/23 12:45:51 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
22/03/23 12:45:51 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-e9ba1e78-c6f2-4c57-a449-74d280c526ee
22/03/23 12:45:52 INFO namenode.FSNamesystem: No KeyProvider found.
22/03/23 12:45:52 INFO namenode.FSNamesystem: fsLock is fair: true
22/03/23 12:45:52 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
22/03/23 12:45:53 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
22/03/23 12:45:53 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
22/03/23 12:45:53 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
22/03/23 12:45:53 INFO blockmanagement.BlockManager: The block deletion will start around 2022 三月 23 12:45:53
22/03/23 12:45:53 INFO util.GSet: Computing capacity for map BlocksMap
22/03/23 12:45:53 INFO util.GSet: VM type = 64-bit
22/03/23 12:45:53 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
22/03/23 12:45:53 INFO util.GSet: capacity = 2^21 = 2097152 entries
22/03/23 12:45:53 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
22/03/23 12:45:53 INFO blockmanagement.BlockManager: defaultReplication = 1
22/03/23 12:45:53 INFO blockmanagement.BlockManager: maxReplication = 512
22/03/23 12:45:53 INFO blockmanagement.BlockManager: minReplication = 1
22/03/23 12:45:53 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
22/03/23 12:45:53 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
22/03/23 12:45:53 INFO blockmanagement.BlockManager: encryptDataTransfer = false
22/03/23 12:45:53 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
22/03/23 12:45:53 INFO namenode.FSNamesystem: fsOwner = root (auth:SIMPLE)
22/03/23 12:45:53 INFO namenode.FSNamesystem: supergroup = supergroup
22/03/23 12:45:53 INFO namenode.FSNamesystem: isPermissionEnabled = true
22/03/23 12:45:53 INFO namenode.FSNamesystem: HA Enabled: false
22/03/23 12:45:53 INFO namenode.FSNamesystem: Append Enabled: true
22/03/23 12:45:55 INFO util.GSet: Computing capacity for map INodeMap
22/03/23 12:45:55 INFO util.GSet: VM type = 64-bit
22/03/23 12:45:55 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
22/03/23 12:45:55 INFO util.GSet: capacity = 2^20 = 1048576 entries
22/03/23 12:45:55 INFO namenode.FSDirectory: ACLs enabled? false
22/03/23 12:45:55 INFO namenode.FSDirectory: XAttrs enabled? true
22/03/23 12:45:55 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
22/03/23 12:45:55 INFO namenode.NameNode: Caching file names occuring more than 10 times
22/03/23 12:45:55 INFO util.GSet: Computing capacity for map cachedBlocks
22/03/23 12:45:55 INFO util.GSet: VM type = 64-bit
22/03/23 12:45:55 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
22/03/23 12:45:55 INFO util.GSet: capacity = 2^18 = 262144 entries
22/03/23 12:45:55 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
22/03/23 12:45:55 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
22/03/23 12:45:55 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
22/03/23 12:45:55 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
22/03/23 12:45:55 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
22/03/23 12:45:55 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
22/03/23 12:45:55 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
22/03/23 12:45:55 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
22/03/23 12:45:55 INFO util.GSet: Computing capacity for map NameNodeRetryCache
22/03/23 12:45:55 INFO util.GSet: VM type = 64-bit
22/03/23 12:45:55 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
22/03/23 12:45:55 INFO util.GSet: capacity = 2^15 = 32768 entries
22/03/23 12:45:56 INFO namenode.FSImage: Allocated new BlockPoolId: BP-761487052-192.168.101.166-1648010756451
22/03/23 12:45:56 INFO common.Storage: Storage directory /usr/local/hadoop/hadoop-2.7.4/tmp/dfs/name has been successfully formatted.
22/03/23 12:45:56 INFO namenode.FSImageFormatProtobuf: Saving image file /usr/local/hadoop/hadoop-2.7.4/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
22/03/23 12:45:56 INFO namenode.FSImageFormatProtobuf: Image file /usr/local/hadoop/hadoop-2.7.4/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 321 bytes saved in 0 seconds.
22/03/23 12:45:56 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
22/03/23 12:45:56 INFO util.ExitUtil: Exiting with status 0
22/03/23 12:45:56 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop1/192.168.101.166
************************************************************/
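The lines to look for are "Storage directory ... has been successfully formatted." and "Exiting with status 0": they confirm the format succeeded. Formatting is a one-time step; running it again assigns a new clusterID, and a DataNode that still holds data from the old ID will refuse to register until its data directory under hadoop.tmp.dir is cleared.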

4. Start HDFS

4.1 With the file system formatted successfully, start HDFS

[root@hadoop1 hadoop]# start-dfs.sh
Starting namenodes on [hadoop1]
The authenticity of host 'hadoop1 (fe80::60f5:7b31:bf63:ccef%ens33)' can't be established.
ECDSA key fingerprint is SHA256:dLMHzLDwMPEHWjgXb+5N746rIfizy+vrHOaOWh3TsOE.
ECDSA key fingerprint is MD5:5b:3a:cc:9e:2c:8f:37:3c:18:2c:cd:15:c9:a1:f0:11.
Are you sure you want to continue connecting (yes/no)? yes
hadoop1: Warning: Permanently added 'hadoop1,fe80::60f5:7b31:bf63:ccef%ens33' (ECDSA) to the list of known hosts.
root@hadoop1's password: 
hadoop1: starting namenode, logging to /usr/local/hadoop/hadoop-2.7.4/logs/hadoop-root-namenode-hadoop1.out
The authenticity of host 'localhost (::1)' can't be established.
ECDSA key fingerprint is SHA256:dLMHzLDwMPEHWjgXb+5N746rIfizy+vrHOaOWh3TsOE.
ECDSA key fingerprint is MD5:5b:3a:cc:9e:2c:8f:37:3c:18:2c:cd:15:c9:a1:f0:11.
Are you sure you want to continue connecting (yes/no)? yes
localhost: Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
root@localhost's password: 
localhost: starting datanode, logging to /usr/local/hadoop/hadoop-2.7.4/logs/hadoop-root-datanode-hadoop1.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
ECDSA key fingerprint is SHA256:dLMHzLDwMPEHWjgXb+5N746rIfizy+vrHOaOWh3TsOE.
ECDSA key fingerprint is MD5:5b:3a:cc:9e:2c:8f:37:3c:18:2c:cd:15:c9:a1:f0:11.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
root@0.0.0.0's password: 
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/hadoop-2.7.4/logs/hadoop-root-secondarynamenode-hadoop1.out
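The repeated host-key confirmations and password prompts above appear because passwordless SSH from root to the node itself has not been set up. A minimal sketch of the usual fix (run as root; the three targets are the names the start script connects to in the output above):

# Generate an RSA key pair for root; accept the defaults at every prompt.
ssh-keygen -t rsa
# Install the public key for each name start-dfs.sh uses to reach this node.
ssh-copy-id root@hadoop1
ssh-copy-id root@localhost
ssh-copy-id root@0.0.0.0

After this, start-dfs.sh and stop-dfs.sh run without prompting for passwords.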

4.2 After startup, use the jps command to list all Java processes

[root@hadoop1 hadoop]# jps
70164 SecondaryNameNode
69526 NameNode
69883 DataNode
71691 Jps
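All three HDFS daemons (NameNode, DataNode, SecondaryNameNode) are running, plus the Jps process itself. As an optional extra check, the dfsadmin report shows whether the DataNode has registered with the NameNode and how much capacity it contributes:

# Print overall HDFS capacity plus one entry per live DataNode.
hdfs dfsadmin -report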

Now, open a browser inside the VMware guest and visit the NameNode web UI; with Hadoop 2.7.x defaults it listens on port 50070, so for this setup the address is http://hadoop1:50070 (or http://192.168.101.166:50070 if the hostname is not resolvable from the browser host).

Sharing is a pleasure and a record of personal growth. Most of these articles are summaries of work experience and notes from everyday study; given the limits of my own understanding, mistakes are inevitable, so please point them out and let us improve together.
