
               Hadoop Cluster: A Summary of Commands Commonly Used for Big Data Operations on HDFS Clusters

                                              Author: Yin Zhengjie

Copyright notice: This is an original work. Reproduction is declined; violations will be pursued legally.

 

 

  This post briefly touches on operations tasks such as rolling the edit log, merging fsimage files, and directory space quotas. Without further ado, here are the commands, for easy reference later.

 

I. Viewing the hdfs help information

[yinzhengjie@s101 ~]$ hdfs
Usage: hdfs [--config confdir] [--loglevel loglevel] COMMAND
       where COMMAND is one of:
  dfs                  run a filesystem command on the file systems supported in Hadoop.
  classpath            prints the classpath
  namenode -format     format the DFS filesystem
  secondarynamenode    run the DFS secondary namenode
  namenode             run the DFS namenode
  journalnode          run the DFS journalnode
  zkfc                 run the ZK Failover Controller daemon
  datanode             run a DFS datanode
  dfsadmin             run a DFS admin client
  haadmin              run a DFS HA admin client
  fsck                 run a DFS filesystem checking utility
  balancer             run a cluster balancing utility
  jmxget               get JMX exported values from NameNode or DataNode.
  mover                run a utility to move block replicas across
                       storage types
  oiv                  apply the offline fsimage viewer to an fsimage
  oiv_legacy           apply the offline fsimage viewer to an legacy fsimage
  oev                  apply the offline edits viewer to an edits file
  fetchdt              fetch a delegation token from the NameNode
  getconf              get config values from configuration
  groups               get the groups which users belong to
  snapshotDiff         diff two snapshots of a directory or diff the
                       current directory contents with a snapshot
  lsSnapshottableDir   list all snapshottable dirs owned by the current user
                       Use -help to see options
  portmap              run a portmap service
  nfs3                 run an NFS version 3 gateway
  cacheadmin           configure the HDFS cache
  crypto               configure HDFS encryption zones
  storagepolicies      list/get/set block storage policies
  version              print the version

Most commands print help when invoked w/o parameters.
[yinzhengjie@s101 ~]$ which hdfs
/soft/hadoop/bin/hdfs
[yinzhengjie@s101 ~]$

  As shown above, hdfs has quite a few subcommands. If you are new to this, dfs is the place to start: the dfs subcommand runs filesystem commands against the file systems Hadoop supports, and those commands look almost identical to the Linux commands we already know. Let's see how to use them.

 

II. Examples of using hdfs with dfs

  In fact, hdfs combined with dfs ends up calling the hadoop fs command. If you don't believe it, look at the help output for yourself:

[yinzhengjie@s101 ~]$ hdfs dfs
Usage: hadoop fs [generic options]
	[-appendToFile <localsrc> ... <dst>]
	[-cat [-ignoreCrc] <src> ...]
	[-checksum <src> ...]
	[-chgrp [-R] GROUP PATH...]
	[-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
	[-chown [-R] [OWNER][:[GROUP]] PATH...]
	[-copyFromLocal [-f] [-p] [-l] <localsrc> ... <dst>]
	[-copyToLocal [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
	[-count [-q] [-h] <path> ...]
	[-cp [-f] [-p | -p[topax]] <src> ... <dst>]
	[-createSnapshot <snapshotDir> [<snapshotName>]]
	[-deleteSnapshot <snapshotDir> <snapshotName>]
	[-df [-h] [<path> ...]]
	[-du [-s] [-h] <path> ...]
	[-expunge]
	[-find <path> ... <expression> ...]
	[-get [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
	[-getfacl [-R] <path>]
	[-getfattr [-R] {-n name | -d} [-e en] <path>]
	[-getmerge [-nl] <src> <localdst>]
	[-help [cmd ...]]
	[-ls [-d] [-h] [-R] [<path> ...]]
	[-mkdir [-p] <path> ...]
	[-moveFromLocal <localsrc> ... <dst>]
	[-moveToLocal <src> <localdst>]
	[-mv <src> ... <dst>]
	[-put [-f] [-p] [-l] <localsrc> ... <dst>]
	[-renameSnapshot <snapshotDir> <oldName> <newName>]
	[-rm [-f] [-r|-R] [-skipTrash] <src> ...]
	[-rmdir [--ignore-fail-on-non-empty] <dir> ...]
	[-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
	[-setfattr {-n name [-v value] | -x name} <path>]
	[-setrep [-R] [-w] <rep> <path> ...]
	[-stat [format] <path> ...]
	[-tail [-f] <file>]
	[-test -[defsz] <path>]
	[-text [-ignoreCrc] <src> ...]
	[-touchz <path> ...]
	[-truncate [-w] <length> <path> ...]
	[-usage [cmd ...]]

Generic options supported are
-conf <configuration file>     specify an application configuration file
-D <property=value>            use value for given property
-fs <local|namenode:port>      specify a namenode
-jt <local|resourcemanager:port>    specify a ResourceManager
-files <comma separated list of files>    specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars>    specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives>    specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]

[yinzhengjie@s101 ~]$
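
  Since both entry points go through the same FsShell implementation, hdfs dfs and hadoop fs are interchangeable. A quick sanity check you can run yourself:

# hdfs dfs and hadoop fs are two names for the same filesystem shell,
# so these two listings produce identical output:
hdfs dfs -ls /
hadoop fs -ls /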

1>. Viewing help for an hdfs dfs subcommand

[yinzhengjie@s101 ~]$ hdfs dfs -help ls
-ls [-d] [-h] [-R] [<path> ...] :
  List the contents that match the specified file pattern. If path is not
  specified, the contents of /user/<currentUser> will be listed. Directory entries
  are of the form:
  	permissions - userId groupId sizeOfDirectory(in bytes) modificationDate(yyyy-MM-dd HH:mm) directoryName
  and file entries are of the form:
  	permissions numberOfReplicas userId groupId sizeOfFile(in bytes) modificationDate(yyyy-MM-dd HH:mm) fileName
  -d  Directories are listed as plain files.
  -h  Formats the sizes of files in a human-readable fashion rather than a number of bytes.
  -R  Recursively list the contents of directories.
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs dfs -help ls

2>. Listing existing files in the HDFS filesystem

[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 2 items
-rw-r--r--   3 yinzhengjie supergroup        517 2018-05-25 21:17 /xcall.sh
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 21:17 /xrsync.sh
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs dfs -ls /

3>. Creating a file in the HDFS filesystem

[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 2 items
-rw-r--r--   3 yinzhengjie supergroup        517 2018-05-25 21:17 /xcall.sh
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 21:17 /xrsync.sh
[yinzhengjie@s101 ~]$ hdfs dfs -touchz /1.txt
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 3 items
-rw-r--r--   3 yinzhengjie supergroup          0 2018-05-25 21:56 /1.txt
-rw-r--r--   3 yinzhengjie supergroup        517 2018-05-25 21:17 /xcall.sh
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 21:17 /xrsync.sh
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs dfs -touchz /1.txt

4>. Uploading a file to the root directory (while the upload is in progress, a temporary file with a "._COPYING_" suffix is created; see the sketch after the session below)

[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 3 items
-rw-r--r--   3 yinzhengjie supergroup          0 2018-05-25 21:56 /1.txt
-rw-r--r--   3 yinzhengjie supergroup        517 2018-05-25 21:17 /xcall.sh
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 21:17 /xrsync.sh
[yinzhengjie@s101 ~]$ hdfs dfs -put hadoop-2.7.3.tar.gz /
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 4 items
-rw-r--r--   3 yinzhengjie supergroup          0 2018-05-25 21:56 /1.txt
-rw-r--r--   3 yinzhengjie supergroup  214092195 2018-05-25 21:59 /hadoop-2.7.3.tar.gz
-rw-r--r--   3 yinzhengjie supergroup        517 2018-05-25 21:17 /xcall.sh
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 21:17 /xrsync.sh
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs dfs -put hadoop-2.7.3.tar.gz /
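
  If you want to see the temporary file for yourself, list the target directory while a large upload is still running. A minimal sketch (/upload_test is a hypothetical scratch directory, and the timing only works if the file is big enough for the put to take a few seconds):

# Kick off a large upload in the background, then list the directory
# while the copy is in flight; the in-progress file carries the
# "._COPYING_" suffix and is renamed once the upload completes.
hdfs dfs -mkdir /upload_test                       # hypothetical scratch directory
hdfs dfs -put hadoop-2.7.3.tar.gz /upload_test &
hdfs dfs -ls /upload_test                          # shows /upload_test/hadoop-2.7.3.tar.gz._COPYING_
wait                                               # wait for the background put to finish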

5>. Downloading a file from the HDFS filesystem

[yinzhengjie@s101 ~]$ ll
total 0
drwxrwxr-x. 4 yinzhengjie yinzhengjie 35 May 25 19:08 hadoop
drwxrwxr-x. 2 yinzhengjie yinzhengjie 96 May 25 22:05 shell
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 4 items
-rw-r--r--   3 yinzhengjie supergroup          0 2018-05-25 21:56 /1.txt
-rw-r--r--   3 yinzhengjie supergroup  214092195 2018-05-25 21:59 /hadoop-2.7.3.tar.gz
-rw-r--r--   3 yinzhengjie supergroup        517 2018-05-25 21:17 /xcall.sh
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 21:17 /xrsync.sh
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs dfs -get /1.txt
[yinzhengjie@s101 ~]$ ll
total 0
-rw-r--r--. 1 yinzhengjie yinzhengjie  0 May 25 22:06 1.txt
drwxrwxr-x. 4 yinzhengjie yinzhengjie 35 May 25 19:08 hadoop
drwxrwxr-x. 2 yinzhengjie yinzhengjie 96 May 25 22:05 shell
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs dfs -get /1.txt

6>. Deleting a file in the HDFS filesystem

[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 4 items
-rw-r--r--   3 yinzhengjie supergroup          0 2018-05-25 21:56 /1.txt
-rw-r--r--   3 yinzhengjie supergroup  214092195 2018-05-25 21:59 /hadoop-2.7.3.tar.gz
-rw-r--r--   3 yinzhengjie supergroup        517 2018-05-25 21:17 /xcall.sh
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 21:17 /xrsync.sh
[yinzhengjie@s101 ~]$ hdfs dfs -rm /1.txt
18/05/25 22:08:07 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
Deleted /1.txt
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 3 items
-rw-r--r--   3 yinzhengjie supergroup  214092195 2018-05-25 21:59 /hadoop-2.7.3.tar.gz
-rw-r--r--   3 yinzhengjie supergroup        517 2018-05-25 21:17 /xcall.sh
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 21:17 /xrsync.sh
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs dfs -rm /1.txt

7>. Viewing file contents in the HDFS filesystem

[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 3 items
-rw-r--r--   3 yinzhengjie supergroup  214092195 2018-05-25 21:59 /hadoop-2.7.3.tar.gz
-rw-r--r--   3 yinzhengjie supergroup        517 2018-05-25 21:17 /xcall.sh
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 21:17 /xrsync.sh
[yinzhengjie@s101 ~]$ hdfs dfs -cat /xrsync.sh
#!/bin/bash
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie
#EMAIL:y1053419035@qq.com

#Check whether the user passed any arguments
if [ $# -lt 1 ];then
        echo "Please pass an argument";
        exit
fi

#Get the file path
file=$@

#Get the file name
filename=`basename $file`

#Get the parent path
dirpath=`dirname $file`

#Get the full path
cd $dirpath
fullpath=`pwd -P`

#Sync the file to the DataNodes
for (( i=102;i<=104;i++ ))
do
        #Turn the terminal green
        tput setaf 2
        echo =========== s$i %file ===========
        #Restore the terminal to its original gray-white color
        tput setaf 7
        #Run the command remotely
        rsync -lr $filename `whoami`@s$i:$fullpath
        #Check whether the command succeeded
        if [ $? == 0 ];then
                echo "Command executed successfully"
        fi
done
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs dfs -cat /xrsync.sh

8>. Creating a directory in the HDFS filesystem

[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 3 items
-rw-r--r--   3 yinzhengjie supergroup  214092195 2018-05-25 21:59 /hadoop-2.7.3.tar.gz
-rw-r--r--   3 yinzhengjie supergroup        517 2018-05-25 21:17 /xcall.sh
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 21:17 /xrsync.sh
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs dfs -mkdir /shell
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 4 items
-rw-r--r--   3 yinzhengjie supergroup  214092195 2018-05-25 21:59 /hadoop-2.7.3.tar.gz
drwxr-xr-x   - yinzhengjie supergroup          0 2018-05-25 22:12 /shell
-rw-r--r--   3 yinzhengjie supergroup        517 2018-05-25 21:17 /xcall.sh
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 21:17 /xrsync.sh
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs dfs -mkdir /shell

9>. Renaming a file in the HDFS filesystem (you can also use this to move a file into a directory)

[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 4 items
-rw-r--r--   3 yinzhengjie supergroup  214092195 2018-05-25 21:59 /hadoop-2.7.3.tar.gz
drwxr-xr-x   - yinzhengjie supergroup          0 2018-05-25 22:12 /shell
-rw-r--r--   3 yinzhengjie supergroup        517 2018-05-25 21:17 /xcall.sh
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 21:17 /xrsync.sh
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs dfs -mv /xcall.sh /call.sh
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 4 items
-rw-r--r--   3 yinzhengjie supergroup        517 2018-05-25 21:17 /call.sh
-rw-r--r--   3 yinzhengjie supergroup  214092195 2018-05-25 21:59 /hadoop-2.7.3.tar.gz
drwxr-xr-x   - yinzhengjie supergroup          0 2018-05-25 22:12 /shell
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 21:17 /xrsync.sh
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs dfs -mv /xcall.sh /call.sh
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 4 items
-rw-r--r--   3 yinzhengjie supergroup        517 2018-05-25 21:17 /call.sh
-rw-r--r--   3 yinzhengjie supergroup  214092195 2018-05-25 21:59 /hadoop-2.7.3.tar.gz
drwxr-xr-x   - yinzhengjie supergroup          0 2018-05-25 22:12 /shell
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 21:17 /xrsync.sh
[yinzhengjie@s101 ~]$ hdfs dfs -mv /call.sh /shell
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 3 items
-rw-r--r--   3 yinzhengjie supergroup  214092195 2018-05-25 21:59 /hadoop-2.7.3.tar.gz
drwxr-xr-x   - yinzhengjie supergroup          0 2018-05-25 22:19 /shell
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 21:17 /xrsync.sh
[yinzhengjie@s101 ~]$ hdfs dfs -ls /shell
Found 1 items
-rw-r--r--   3 yinzhengjie supergroup        517 2018-05-25 21:17 /shell/call.sh
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs dfs -mv /call.sh /shell

10>. Copying a file into a directory in the HDFS filesystem

[yinzhengjie@s101 ~]$ hdfs dfs -ls /shell
Found 1 items
-rw-r--r--   3 yinzhengjie supergroup        517 2018-05-25 21:17 /shell/call.sh
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 3 items
-rw-r--r--   3 yinzhengjie supergroup  214092195 2018-05-25 21:59 /hadoop-2.7.3.tar.gz
drwxr-xr-x   - yinzhengjie supergroup          0 2018-05-25 22:19 /shell
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 21:17 /xrsync.sh
[yinzhengjie@s101 ~]$ hdfs dfs -cp /xrsync.sh /shell
[yinzhengjie@s101 ~]$ hdfs dfs -ls /shell
Found 2 items
-rw-r--r--   3 yinzhengjie supergroup        517 2018-05-25 21:17 /shell/call.sh
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 22:21 /shell/xrsync.sh
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs dfs -cp /xrsync.sh /shell

11>. Recursively deleting a directory

[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 3 items
-rw-r--r--   3 yinzhengjie supergroup  214092195 2018-05-25 21:59 /hadoop-2.7.3.tar.gz
drwxr-xr-x   - yinzhengjie supergroup          0 2018-05-25 22:21 /shell
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 21:17 /xrsync.sh
[yinzhengjie@s101 ~]$ hdfs dfs -rmr /shell
rmr: DEPRECATED: Please use 'rm -r' instead.
18/05/25 22:22:31 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
Deleted /shell
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 2 items
-rw-r--r--   3 yinzhengjie supergroup  214092195 2018-05-25 21:59 /hadoop-2.7.3.tar.gz
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 21:17 /xrsync.sh
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs dfs -rmr /shell

12>. Listing local files (the default filesystem is HDFS)

[yinzhengjie@s101 ~]$ hdfs dfs -ls file:///home/yinzhengjie/
Found 9 items
-rw-------   1 yinzhengjie yinzhengjie        940 2018-05-25 19:17 file:///home/yinzhengjie/.bash_history
-rw-r--r--   1 yinzhengjie yinzhengjie         18 2015-11-19 21:02 file:///home/yinzhengjie/.bash_logout
-rw-r--r--   1 yinzhengjie yinzhengjie        193 2015-11-19 21:02 file:///home/yinzhengjie/.bash_profile
-rw-r--r--   1 yinzhengjie yinzhengjie        231 2015-11-19 21:02 file:///home/yinzhengjie/.bashrc
drwxrwxr-x   - yinzhengjie yinzhengjie         39 2018-05-25 09:14 file:///home/yinzhengjie/.oracle_jre_usage
drwx------   - yinzhengjie yinzhengjie         76 2018-05-25 19:20 file:///home/yinzhengjie/.ssh
-rw-r--r--   1 yinzhengjie yinzhengjie          0 2018-05-25 22:06 file:///home/yinzhengjie/1.txt
drwxrwxr-x   - yinzhengjie yinzhengjie         35 2018-05-25 19:08 file:///home/yinzhengjie/hadoop
drwxrwxr-x   - yinzhengjie yinzhengjie         96 2018-05-25 22:05 file:///home/yinzhengjie/shell
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs dfs -ls file:///home/yinzhengjie/
[yinzhengjie@s101 ~]$ hdfs dfs -ls hdfs:/
Found 2 items
-rw-r--r--   3 yinzhengjie supergroup  214092195 2018-05-25 21:59 hdfs:///hadoop-2.7.3.tar.gz
-rw-r--r--   3 yinzhengjie supergroup        700 2018-05-25 21:17 hdfs:///xrsync.sh
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs dfs -ls hdfs:/
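
  The scheme prefix is what picks the filesystem; a bare path simply resolves against whatever fs.defaultFS points at. A quick way to confirm this on your own cluster:

# A bare path resolves against fs.defaultFS, so the first two listings
# are equivalent, while the file:// scheme forces the local filesystem:
hdfs getconf -confKey fs.defaultFS        # e.g. hdfs://s101/ on this cluster
hdfs dfs -ls /
hdfs dfs -ls file:///home/yinzhengjie/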

13>. Appending content to a file in the HDFS filesystem

[yinzhengjie@s101 ~]$ ll
total 390280
drwxrwxr-x. 3 yinzhengjie yinzhengjie        16 May 27 00:01 hadoop
drwxr-xr-x. 9 yinzhengjie yinzhengjie      4096 Aug 17  2016 hadoop-2.7.3
-rw-rw-r--. 1 yinzhengjie yinzhengjie 214092195 Aug 26  2016 hadoop-2.7.3.tar.gz
-rw-rw-r--. 1 yinzhengjie yinzhengjie 185540433 May 17  2017 jdk-8u131-linux-x64.tar.gz
-rwxrwxr-x. 1 yinzhengjie yinzhengjie       615 May 26 23:24 xcall.sh
-rwxrwxr-x. 1 yinzhengjie yinzhengjie       742 May 26 23:29 xrsync.sh
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 2 items
-rw-r--r--   3 yinzhengjie supergroup  214092195 2018-05-27 00:16 /hadoop-2.7.3.tar.gz
-rw-r--r--   3 yinzhengjie supergroup        615 2018-05-27 00:15 /xcall.sh
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs dfs -appendToFile xrsync.sh /xcall.sh
[yinzhengjie@s101 ~]$ hdfs dfs -ls /
Found 2 items
-rw-r--r--   3 yinzhengjie supergroup  214092195 2018-05-27 00:16 /hadoop-2.7.3.tar.gz
-rw-r--r--   3 yinzhengjie supergroup       1357 2018-05-27 01:28 /xcall.sh
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs dfs -appendToFile xrsync.sh /xcall.sh
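
  Note that -appendToFile can also read from standard input when you pass "-" as the source, which saves creating a local file first; a minimal sketch:

# Passing "-" as the local source makes -appendToFile read from stdin:
echo "one more line" | hdfs dfs -appendToFile - /xcall.sh
hdfs dfs -tail /xcall.sh        # verify the appended line arrived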

14>. Formatting the NameNode

  Note that the session below runs hdfs namenode without formatting first: it fails with an InconsistentFSStateException because the storage directory /tmp/hadoop-root/dfs/name does not exist yet, which is exactly the state that hdfs namenode -format fixes.

[root@yinzhengjie ~]# hdfs namenode
18/05/27 17:23:56 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = yinzhengjie/211.98.71.195
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.7.3
STARTUP_MSG:   classpath = /soft/hadoop-2.7.3/etc/hadoop:/soft/hadoop-2.7.3/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/stax-api-1.0-2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/activation-1.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jersey-server-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/asm-3.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/log4j-1.2.17.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jets3t-0.9.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/httpclient-4.2.5.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/httpcore-4.2.5.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-lang-2.6.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-configuration-1.6.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-digester-1.8.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/avro-1.7.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/paranamer-2.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-compress-1.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/xz-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/gson-2.2.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/hadoop-auth-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/zookeeper-3.4.6.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/netty-3.6.2.Final.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/curator-framework-2.7.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/curator-client-2.7.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jsch-0.1.42.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/junit-4.11.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/hamcrest-core-1.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/mockito-all-1.8.5.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/hadoop-annotations-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/guava-11.0.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jsr305-3.0.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-cli-1.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-math3-3.1.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/xmlenc-0.52.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-httpclient-3.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-logging-1.1.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-codec-1.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-io-2.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-net-3.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-collections-3.2.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/servlet-api-2.5.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jetty-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jetty-util-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jsp-api-2.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jersey-core-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jersey-json-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jettison-1.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3-tests.jar:/soft/hadoop-2.7.3/share/hadoop/common/hadoop-nfs-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/guava-11.0.2.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-io-2.4.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/asm-3.2.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3-tests.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-lang-2.6.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/guava-11.0.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-cli-1.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/log4j-1.2.17.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/activation-1.1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/xz-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/servlet-api-2.5.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-codec-1.4.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-core-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-client-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/guice-3.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/javax.inject-1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/aopalliance-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-io-2.4.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-server-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/asm-3.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-json-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jettison-1.1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-api-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-common-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-common-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-client-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-registry-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/xz-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/hadoop-annotations-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/asm-3.2.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-3.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/javax.inject-1.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/junit-4.11.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3-tests.jar:/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff; compiled by 'root' on 2016-08-18T01:41Z
STARTUP_MSG:   java = 1.8.0_131
************************************************************/
18/05/27 17:23:56 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
18/05/27 17:23:56 INFO namenode.NameNode: createNameNode []
18/05/27 17:23:56 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
18/05/27 17:23:57 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
18/05/27 17:23:57 INFO impl.MetricsSystemImpl: NameNode metrics system started
18/05/27 17:23:57 INFO namenode.NameNode: fs.defaultFS is hdfs://localhost/
18/05/27 17:23:57 INFO hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
18/05/27 17:23:57 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
18/05/27 17:23:57 INFO server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
18/05/27 17:23:57 INFO http.HttpRequestLog: Http request log for http.requests.namenode is not defined
18/05/27 17:23:57 INFO http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
18/05/27 17:23:57 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
18/05/27 17:23:57 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
18/05/27 17:23:57 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
18/05/27 17:23:57 INFO http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
18/05/27 17:23:57 INFO http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
18/05/27 17:23:57 INFO http.HttpServer2: Jetty bound to port 50070
18/05/27 17:23:57 INFO mortbay.log: jetty-6.1.26
18/05/27 17:23:58 INFO mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
18/05/27 17:23:58 WARN namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
18/05/27 17:23:58 WARN namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
18/05/27 17:23:58 INFO namenode.FSNamesystem: No KeyProvider found.
18/05/27 17:23:58 INFO namenode.FSNamesystem: fsLock is fair:true
18/05/27 17:23:58 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
18/05/27 17:23:58 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
18/05/27 17:23:58 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
18/05/27 17:23:58 INFO blockmanagement.BlockManager: The block deletion will start around 2018 May 27 17:23:58
18/05/27 17:23:58 INFO util.GSet: Computing capacity for map BlocksMap
18/05/27 17:23:58 INFO util.GSet: VM type       = 64-bit
18/05/27 17:23:58 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
18/05/27 17:23:58 INFO util.GSet: capacity      = 2^21 = 2097152 entries
18/05/27 17:23:58 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
18/05/27 17:23:58 INFO blockmanagement.BlockManager: defaultReplication         = 1
18/05/27 17:23:58 INFO blockmanagement.BlockManager: maxReplication             = 512
18/05/27 17:23:58 INFO blockmanagement.BlockManager: minReplication             = 1
18/05/27 17:23:58 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
18/05/27 17:23:58 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
18/05/27 17:23:58 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
18/05/27 17:23:58 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
18/05/27 17:23:58 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
18/05/27 17:23:58 INFO namenode.FSNamesystem: supergroup          = supergroup
18/05/27 17:23:58 INFO namenode.FSNamesystem: isPermissionEnabled = true
18/05/27 17:23:58 INFO namenode.FSNamesystem: HA Enabled: false
18/05/27 17:23:58 INFO namenode.FSNamesystem: Append Enabled: true
18/05/27 17:23:58 INFO util.GSet: Computing capacity for map INodeMap
18/05/27 17:23:58 INFO util.GSet: VM type       = 64-bit
18/05/27 17:23:58 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
18/05/27 17:23:58 INFO util.GSet: capacity      = 2^20 = 1048576 entries
18/05/27 17:23:58 INFO namenode.FSDirectory: ACLs enabled? false
18/05/27 17:23:58 INFO namenode.FSDirectory: XAttrs enabled? true
18/05/27 17:23:58 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
18/05/27 17:23:58 INFO namenode.NameNode: Caching file names occuring more than 10 times
18/05/27 17:23:58 INFO util.GSet: Computing capacity for map cachedBlocks
18/05/27 17:23:58 INFO util.GSet: VM type       = 64-bit
18/05/27 17:23:58 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
18/05/27 17:23:58 INFO util.GSet: capacity      = 2^18 = 262144 entries
18/05/27 17:23:58 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
18/05/27 17:23:58 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
18/05/27 17:23:58 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
18/05/27 17:23:58 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
18/05/27 17:23:58 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
18/05/27 17:23:58 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
18/05/27 17:23:58 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
18/05/27 17:23:58 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
18/05/27 17:23:58 INFO util.GSet: Computing capacity for map NameNodeRetryCache
18/05/27 17:23:58 INFO util.GSet: VM type       = 64-bit
18/05/27 17:23:58 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
18/05/27 17:23:58 INFO util.GSet: capacity      = 2^15 = 32768 entries
18/05/27 17:23:58 WARN common.Storage: Storage directory /tmp/hadoop-root/dfs/name does not exist
18/05/27 17:23:58 WARN namenode.FSNamesystem: Encountered exception loading fsimage
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-root/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:327)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:215)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:975)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:585)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:645)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:796)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1493)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559)
18/05/27 17:23:58 INFO mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
18/05/27 17:23:58 INFO impl.MetricsSystemImpl: Stopping NameNode metrics system...
18/05/27 17:23:58 INFO impl.MetricsSystemImpl: NameNode metrics system stopped.
18/05/27 17:23:58 INFO impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
18/05/27 17:23:58 ERROR namenode.NameNode: Failed to start namenode.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-root/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:327)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:215)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:975)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:585)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:645)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:796)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1493)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559)
18/05/27 17:23:58 INFO util.ExitUtil: Exiting with status 1
18/05/27 17:23:58 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at yinzhengjie/211.98.71.195
************************************************************/
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs namenode

15>. Creating a snapshot (for more detail on snapshots, see: https://www.cnblogs.com/yinzhengjie/p/9099529.html)

[root@yinzhengjie ~]# hdfs dfs -ls -R /
drwxr-xr-x   - root supergroup          0 2018-05-27 20:37 /data
drwxr-xr-x   - root supergroup          0 2018-05-27 18:24 /data/etc
-rw-r--r--   1 root supergroup          6 2018-05-27 18:25 /data/index.html
-rw-r--r--   1 root supergroup         12 2018-05-27 20:28 /data/name.txt
-rw-r--r--   1 root supergroup         11 2018-05-27 18:27 /data/yinzhengjie.sql
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# echo "hello" > 1.txt
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# echo "world" > 2.txt
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -put 1.txt /data
[root@yinzhengjie ~]# hdfs dfs -put 2.txt /data/etc
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -ls -R /
drwxr-xr-x   - root supergroup          0 2018-05-27 20:58 /data
-rw-r--r--   1 root supergroup          6 2018-05-27 20:58 /data/1.txt
drwxr-xr-x   - root supergroup          0 2018-05-27 20:58 /data/etc
-rw-r--r--   1 root supergroup          6 2018-05-27 20:58 /data/etc/2.txt
-rw-r--r--   1 root supergroup          6 2018-05-27 18:25 /data/index.html
-rw-r--r--   1 root supergroup         12 2018-05-27 20:28 /data/name.txt
-rw-r--r--   1 root supergroup         11 2018-05-27 18:27 /data/yinzhengjie.sql
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfsadmin -allowSnapshot /data                              #Enable the snapshot feature
Allowing snaphot on /data succeeded
[root@yinzhengjie ~]# hdfs dfs -createSnapshot /data firstSnapshot                    #Create a snapshot named "firstSnapshot"
Created snapshot /data/.snapshot/firstSnapshot
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -ls -R /data/.snapshot/firstSnapshot
-rw-r--r--   1 root supergroup          6 2018-05-27 20:58 /data/.snapshot/firstSnapshot/1.txt
drwxr-xr-x   - root supergroup          0 2018-05-27 20:58 /data/.snapshot/firstSnapshot/etc
-rw-r--r--   1 root supergroup          6 2018-05-27 20:58 /data/.snapshot/firstSnapshot/etc/2.txt
-rw-r--r--   1 root supergroup          6 2018-05-27 18:25 /data/.snapshot/firstSnapshot/index.html
-rw-r--r--   1 root supergroup         12 2018-05-27 20:28 /data/.snapshot/firstSnapshot/name.txt
-rw-r--r--   1 root supergroup         11 2018-05-27 18:27 /data/.snapshot/firstSnapshot/yinzhengjie.sql
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -createSnapshot /data firstSnapshot
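
  Once you have more than one snapshot, hdfs snapshotDiff (listed in the hdfs help at the top of this post) reports what changed between them; a minimal sketch, where secondSnapshot is a hypothetical follow-up snapshot:

# Compare two snapshots, or a snapshot against the current state ("."):
hdfs dfs -createSnapshot /data secondSnapshot    # hypothetical second snapshot
hdfs snapshotDiff /data firstSnapshot secondSnapshot
hdfs snapshotDiff /data firstSnapshot .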

16>. Renaming a snapshot

[root@yinzhengjie ~]# hdfs dfs -ls /data/.snapshot/
Found 1 items
drwxr-xr-x   - root supergroup          0 2018-05-27 21:02 /data/.snapshot/firstSnapshot
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -renameSnapshot /data firstSnapshot  newSnapshot                #Rename the firstSnapshot snapshot of /data to newSnapshot
[root@yinzhengjie ~]# hdfs dfs -ls /data/.snapshot/
Found 1 items
drwxr-xr-x   - root supergroup          0 2018-05-27 21:02 /data/.snapshot/newSnapshot
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -renameSnapshot /data firstSnapshot newSnapshot

17>. Deleting a snapshot

[root@yinzhengjie ~]# hdfs dfs -ls /data/.snapshot/
Found 1 items
drwxr-xr-x   - root supergroup          0 2018-05-27 21:02 /data/.snapshot/newSnapshot
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -deleteSnapshot /data newSnapshot
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -ls /data/.snapshot/
[root@yinzhengjie ~]#
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -deleteSnapshot /data newSnapshot

18>. Viewing the contents of a Hadoop SequenceFile

[yinzhengjie@s101 data]$ hdfs dfs -text file:///home/yinzhengjie/data/seq
18/06/01 06:32:32 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
18/06/01 06:32:32 INFO compress.CodecPool: Got brand-new decompressor [.deflate]
yinzhengjie     18
[yinzhengjie@s101 data]$
[yinzhengjie@s101 data]$ hdfs dfs -text file:///home/yinzhengjie/data/seq


 

III. Examples of using hdfs with getconf

1>. Getting the NameNode hostname(s) (there may be more than one)

[yinzhengjie@s101 ~]$ hdfs getconf
hdfs getconf is utility for getting configuration information from the config file.

hadoop getconf 
	[-namenodes]			gets list of namenodes in the cluster.
	[-secondaryNameNodes]			gets list of secondary namenodes in the cluster.
	[-backupNodes]			gets list of backup nodes in the cluster.
	[-includeFile]			gets the include file path that defines the datanodes that can join the cluster.
	[-excludeFile]			gets the exclude file path that defines the datanodes that need to decommissioned.
	[-nnRpcAddresses]			gets the namenode rpc addresses
	[-confKey [key]]			gets a specific key from the configuration

[yinzhengjie@s101 ~]$ hdfs getconf -namenodes
s101
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs getconf -namenodes

2>. Getting the HDFS minimum block size (the default is 1 MB, i.e. 1048576 bytes; if you want to change it, the value must be a multiple of 512, because HDFS verifies data in 512-byte checksum chunks during transfer)

[yinzhengjie@s101 ~]$ hdfs getconf
hdfs getconf is utility for getting configuration information from the config file.

hadoop getconf 
	[-namenodes]			gets list of namenodes in the cluster.
	[-secondaryNameNodes]			gets list of secondary namenodes in the cluster.
	[-backupNodes]			gets list of backup nodes in the cluster.
	[-includeFile]			gets the include file path that defines the datanodes that can join the cluster.
	[-excludeFile]			gets the exclude file path that defines the datanodes that need to decommissioned.
	[-nnRpcAddresses]			gets the namenode rpc addresses
	[-confKey [key]]			gets a specific key from the configuration

[yinzhengjie@s101 ~]$ hdfs getconf -confKey dfs.namenode.fs-limits.min-block-size
1048576
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs getconf -confKey dfs.namenode.fs-limits.min-block-size
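
  The 512-byte constraint comes from HDFS's checksum chunk size, and you can confirm both values with getconf (a minimal check, assuming default configuration):

# dfs.bytes-per-checksum (512 by default) is the unit that block sizes
# must be a multiple of; dfs.blocksize is the regular block size:
hdfs getconf -confKey dfs.bytes-per-checksum    # 512
hdfs getconf -confKey dfs.blocksize             # 134217728 (128 MB) by default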


 

IV. Examples of using hdfs with dfsadmin

1>. Viewing the hdfs dfsadmin help information

[yinzhengjie@s101 ~]$ hdfs dfsadmin
Usage: hdfs dfsadmin
Note: Administrative commands can only be run as the HDFS superuser.
	[-report [-live] [-dead] [-decommissioning]]
	[-safemode <enter | leave | get | wait>]
	[-saveNamespace]
	[-rollEdits]
	[-restoreFailedStorage true|false|check]
	[-refreshNodes]
	[-setQuota <quota> <dirname>...<dirname>]
	[-clrQuota <dirname>...<dirname>]
	[-setSpaceQuota <quota> [-storageType <storagetype>] <dirname>...<dirname>]
	[-clrSpaceQuota [-storageType <storagetype>] <dirname>...<dirname>]
	[-finalizeUpgrade]
	[-rollingUpgrade [<query|prepare|finalize>]]
	[-refreshServiceAcl]
	[-refreshUserToGroupsMappings]
	[-refreshSuperUserGroupsConfiguration]
	[-refreshCallQueue]
	[-refresh <host:ipc_port> <key> [arg1..argn]
	[-reconfig <datanode|...> <host:ipc_port> <start|status>]
	[-printTopology]
	[-refreshNamenodes datanode_host:ipc_port]
	[-deleteBlockPool datanode_host:ipc_port blockpoolId [force]]
	[-setBalancerBandwidth <bandwidth in bytes per second>]
	[-fetchImage <local directory>]
	[-allowSnapshot <snapshotDir>]
	[-disallowSnapshot <snapshotDir>]
	[-shutdownDatanode <datanode_host:ipc_port> [upgrade]]
	[-getDatanodeInfo <datanode_host:ipc_port>]
	[-metasave filename]
	[-triggerBlockReport [-incremental] <datanode_host:ipc_port>]
	[-help [cmd]]

Generic options supported are
-conf <configuration file>     specify an application configuration file
-D <property=value>            use value for given property
-fs <local|namenode:port>      specify a namenode
-jt <local|resourcemanager:port>    specify a ResourceManager
-files <comma separated list of files>    specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars>    specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives>    specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]

[yinzhengjie@s101 ~]$

2>. Viewing help for a specific command

[yinzhengjie@s101 ~]$ hdfs dfsadmin -help rollEdits
-rollEdits:	Rolls the edit log.

[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs dfsadmin -help rollEdits

3>. Manually rolling the edit log (for more detail on log rolling, see: https://www.cnblogs.com/yinzhengjie/p/9098092.html)

[yinzhengjie@s101 ~]$ ls -lh /home/yinzhengjie/hadoop/dfs/name/current/ | grep edits | tail -5
-rw-rw-r--. 1 yinzhengjie yinzhengjie   42 May 27 09:12 edits_0000000000000001090-0000000000000001091
-rw-rw-r--. 1 yinzhengjie yinzhengjie   42 May 27 09:13 edits_0000000000000001092-0000000000000001093
-rw-rw-r--. 1 yinzhengjie yinzhengjie   42 May 27 09:14 edits_0000000000000001094-0000000000000001095
-rw-rw-r--. 1 yinzhengjie yinzhengjie   42 May 27 09:14 edits_0000000000000001096-0000000000000001097
-rw-rw-r--. 1 yinzhengjie yinzhengjie 1.0M May 27 09:14 edits_inprogress_0000000000000001098
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs dfsadmin -rollEdits
Successfully rolled edit logs.
New segment starts at txid 1100
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ ls -lh /home/yinzhengjie/hadoop/dfs/name/current/ | grep edits | tail -5
-rw-rw-r--. 1 yinzhengjie yinzhengjie   42 May 27 09:13 edits_0000000000000001092-0000000000000001093
-rw-rw-r--. 1 yinzhengjie yinzhengjie   42 May 27 09:14 edits_0000000000000001094-0000000000000001095
-rw-rw-r--. 1 yinzhengjie yinzhengjie   42 May 27 09:14 edits_0000000000000001096-0000000000000001097
-rw-rw-r--. 1 yinzhengjie yinzhengjie   42 May 27 09:15 edits_0000000000000001098-0000000000000001099
-rw-rw-r--. 1 yinzhengjie yinzhengjie 1.0M May 27 09:15 edits_inprogress_0000000000000001100
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs dfsadmin -rollEdits

4>. Checking the current safe mode state

[yinzhengjie@s101 ~]$ hdfs dfsadmin -safemode get
Safe mode is OFF
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs dfsadmin -safemode get

5>. Entering safe mode

[yinzhengjie@s101 ~]$ hdfs dfsadmin -safemode get
Safe mode is OFF
[yinzhengjie@s101 ~]$ hdfs dfsadmin -safemode enter
Safe mode is ON
[yinzhengjie@s101 ~]$ hdfs dfsadmin -safemode get
Safe mode is ON
[yinzhengjie@s101 ~]$
[yinzhengjie@s101 ~]$ hdfs dfsadmin -safemode enter
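
  Safe mode matters for operations because -saveNamespace, the command that merges the current edit log into a fresh fsimage (the "merging fsimage files" mentioned at the top of this post), refuses to run outside of it. A minimal sketch of the whole checkpoint sequence:

# Manually checkpoint the namespace: saveNamespace only works in safe mode.
hdfs dfsadmin -safemode enter
hdfs dfsadmin -saveNamespace
hdfs dfsadmin -safemode leave
ls -lh /home/yinzhengjie/hadoop/dfs/name/current/ | grep fsimage   # a new fsimage appears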

6>. Namespace quota (limits the total count of files and directories under a directory; the count includes the directory itself, so a quota of 1 means the directory cannot hold any files, i.e. it must stay empty!)

[root@yinzhengjie ~]# ll
total 16
-rw-r--r--. 1 root root  6 May 27 17:41 index.html
-rw-r--r--. 1 root root 12 May 27 17:42 nginx.conf
-rw-r--r--. 1 root root 11 May 27 17:42 yinzhengjie.sql
-rw-r--r--. 1 root root  7 May 27 18:20 zabbix.conf
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -ls /
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -mkdir -p /data/etc
[root@yinzhengjie ~]# hdfs dfs -ls /
Found 1 items
drwxr-xr-x   - root supergroup          0 2018-05-27 18:24 /data
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfsadmin -setQuota 3 /data
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -put index.html /data
[root@yinzhengjie ~]# hdfs dfs -put yinzhengjie.sql /data
put: The NameSpace quota (directories and files) of directory /data is exceeded: quota=3 file count=4
[root@yinzhengjie ~]# hdfs dfs -ls /data
Found 2 items
drwxr-xr-x   - root supergroup          0 2018-05-27 18:24 /data/etc
-rw-r--r--   1 root supergroup          6 2018-05-27 18:25 /data/index.html
[root@yinzhengjie ~]# hdfs dfsadmin -setQuota 5 /data
[root@yinzhengjie ~]# hdfs dfs -put yinzhengjie.sql /data
[root@yinzhengjie ~]# hdfs dfs -ls /data
Found 3 items
drwxr-xr-x   - root supergroup          0 2018-05-27 18:24 /data/etc
-rw-r--r--   1 root supergroup          6 2018-05-27 18:25 /data/index.html
-rw-r--r--   1 root supergroup         11 2018-05-27 18:27 /data/yinzhengjie.sql
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfsadmin -setQuota 5 /data
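
  To see why a quota of 1 means an empty directory, set it on a fresh directory and try to create anything inside; a minimal sketch (/quota_test is a hypothetical test path):

# The namespace quota counts the directory itself as one item,
# so a quota of 1 leaves no room for any child entry:
hdfs dfs -mkdir /quota_test             # hypothetical test directory
hdfs dfsadmin -setQuota 1 /quota_test
hdfs dfs -touchz /quota_test/a.txt      # fails: NameSpace quota of /quota_test is exceeded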

7>. Space quota (limits the total size of all files under a directory, counting every replica, which gives the inequality: space quota >= actual file size × replication factor)

[root@yinzhengjie ~]# ll
total 181196
-rw-r--r--. 1 root root 185540433 May 27 19:34 jdk-8u131-linux-x64.tar.gz
-rw-r--r--. 1 root root        12 May 27 19:38 name.txt
[root@yinzhengjie ~]#
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -ls -R /
drwxr-xr-x   - root supergroup          0 2018-05-27 20:27 /data
drwxr-xr-x   - root supergroup          0 2018-05-27 18:24 /data/etc
-rw-r--r--   1 root supergroup          6 2018-05-27 18:25 /data/index.html
-rw-r--r--   1 root supergroup         11 2018-05-27 18:27 /data/yinzhengjie.sql
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfsadmin -setSpaceQuota 134217745  /data                          #Set the space quota on /data to 128 MB; my test machine is pseudo-distributed with replication 1, so 128 MB covers one copy
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -put name.txt /data                                           #Upload a small file into /data; it goes through fine
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -ls -R /                                                      #Confirm the upload succeeded
drwxr-xr-x   - root supergroup          0 2018-05-27 20:28 /data
drwxr-xr-x   - root supergroup          0 2018-05-27 18:24 /data/etc
-rw-r--r--   1 root supergroup          6 2018-05-27 18:25 /data/index.html
-rw-r--r--   1 root supergroup         12 2018-05-27 20:28 /data/name.txt
-rw-r--r--   1 root supergroup         11 2018-05-27 18:27 /data/yinzhengjie.sql
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfs -put jdk-8u131-linux-x64.tar.gz /data                         #Uploading a second, larger file triggers the error below!
18/05/27 20:29:40 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota of /data is exceeded: quota = 134217745 B = 128.00 MB but diskspace consumed = 134217757 B = 128.00 MB
	at org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyStoragespaceQuota(DirectoryWithQuotaFeature.java:211)
	at org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyQuota(DirectoryWithQuotaFeature.java:239)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.verifyQuota(FSDirectory.java:878)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:707)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:666)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.addBlock(FSDirectory.java:491)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.saveAllocatedBlock(FSNamesystem.java:3573)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.storeAllocatedBlock(FSNamesystem.java:3157)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3038)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:725)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1458)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1251)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:448)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.protocol.DSQuotaExceededException): The DiskSpace quota of /data is exceeded: quota = 134217745 B = 128.00 MB but diskspace consumed = 134217757 B = 128.00 MB
	at org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyStoragespaceQuota(DirectoryWithQuotaFeature.java:211)
	at org.apache.hadoop.hdfs.server.namenode.DirectoryWithQuotaFeature.verifyQuota(DirectoryWithQuotaFeature.java:239)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.verifyQuota(FSDirectory.java:878)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:707)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:666)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.addBlock(FSDirectory.java:491)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.saveAllocatedBlock(FSNamesystem.java:3573)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.storeAllocatedBlock(FSNamesystem.java:3157)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3038)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:725)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
	at org.apache.hadoop.ipc.Client.call(Client.java:1475)
	at org.apache.hadoop.ipc.Client.call(Client.java:1412)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at com.sun.proxy.$Proxy11.addBlock(Unknown Source)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1455)
	... 2 more
put: The DiskSpace quota of /data is exceeded: quota = 134217745 B = 128.00 MB but diskspace consumed = 134217757 B = 128.00 MB
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfsadmin -setSpaceQuota 134217745 /data
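
  The arithmetic behind the inequality is worth spelling out: every replica counts against the quota. A hedged sketch with made-up numbers:

# space quota >= actual file size * replication factor;
# e.g. a 100 MB file stored with replication 3 consumes 300 MB of quota,
# so the quota must be at least 100 * 1024 * 1024 * 3 = 314572800 bytes:
hdfs dfsadmin -setSpaceQuota 314572800 /data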

8>. Clearing quotas

[root@yinzhengjie ~]# hdfs dfsadmin -clrSpaceQuota /data
[root@yinzhengjie ~]# echo $?
0
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfsadmin -clrSpaceQuota /data
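
  -clrQuota removes the namespace quota in the same way, and hdfs dfs -count -q is a handy way to verify that nothing is left; a minimal sketch:

# -count -q prints QUOTA, REMAINING_QUOTA, SPACE_QUOTA, REMAINING_SPACE_QUOTA
# ahead of the dir/file counts; "none" and "inf" mean no quota is set:
hdfs dfsadmin -clrQuota /data
hdfs dfs -count -q /data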

9>. Enabling the snapshot feature on a directory (snapshots are disabled by default)

[root@yinzhengjie ~]# hdfs dfsadmin -allowSnapShot /data
Allowing snaphot on /data succeeded
[root@yinzhengjie ~]#
[root@yinzhengjie ~]# hdfs dfsadmin -allowSnapShot /data

10>. Disabling the snapshot feature on a directory

[root@yinzhengjie ~]# hdfs dfsadmin -disallowSnapShot /data
Disallowing snaphot on /data succeeded
[root@yinzhengjie ~]#
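
  Disallowing snapshots only succeeds if the directory no longer holds any snapshots; existing ones must be deleted first. A sketch, assuming a snapshot named s1 was created earlier:

# Remove the snapshot, then disable the feature:
hdfs dfs -deleteSnapshot /data s1
hdfs dfsadmin -disallowSnapShot /data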


 

五. Examples of using hdfs with fsck

1>. Using fsck to display HDFS block information

[yinzhengjie@s101 ~]$ hdfs fsck / -files -blocks
Connecting to namenode via http://localhost:50070/fsck?ugi=yinzhengjie&files=1&blocks=1&path=%2F
FSCK started by yinzhengjie (auth:SIMPLE) from /127.0.0.1 for path / at Sat May 26 20:07:16 PDT 2018
/ <dir>
/jdk-8u131-linux-x64.tar.gz 185540433 bytes, 2 block(s):  OK
0. BP-455760353-211.98.71.195-1527235200324:blk_1073741825_1001 len=134217728 repl=1
1. BP-455760353-211.98.71.195-1527235200324:blk_1073741826_1002 len=51322705 repl=1

Status: HEALTHY
 Total size:    185540433 B
 Total dirs:    1
 Total files:   1
 Total symlinks:                0
 Total blocks (validated):      2 (avg. block size 92770216 B)
 Minimally replicated blocks:   2 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       0 (0.0 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    1
 Average block replication:     1.0
 Corrupt blocks:                0
 Missing replicas:              0 (0.0 %)
 Number of data-nodes:          1
 Number of racks:               1
FSCK ended at Sat May 26 20:07:16 PDT 2018 in 12 milliseconds

The filesystem under path '/' is HEALTHY
[yinzhengjie@s101 ~]$
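
  Beyond the overall health report, fsck is the usual tool for tracking down damaged files. A sketch of a few common variants; -move and -delete modify the filesystem, so use them with care:

# List only the corrupt files and blocks under a path:
hdfs fsck / -list-corruptfileblocks
# Also show which datanodes hold each block replica:
hdfs fsck / -files -blocks -locations
# Move corrupt files to /lost+found, or delete them outright:
hdfs fsck / -move
hdfs fsck / -delete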


 

六. Examples of using hdfs with oiv

1>. Viewing the help information for hdfs oiv

[yinzhengjie@s101 ~]$ hdfs oiv
Usage: bin/hdfs oiv [OPTIONS] -i INPUTFILE -o OUTPUTFILE
Offline Image Viewer
View a Hadoop fsimage INPUTFILE using the specified PROCESSOR,
saving the results in OUTPUTFILE.

The oiv utility will attempt to parse correctly formed image files
and will abort fail with mal-formed image files.

The tool works offline and does not require a running cluster in
order to process an image file.

The following image processors are available:
  * XML: This processor creates an XML document with all elements of
    the fsimage enumerated, suitable for further analysis by XML
    tools.
  * FileDistribution: This processor analyzes the file size
    distribution in the image.
    -maxSize specifies the range [0, maxSize] of file sizes to be
     analyzed (128GB by default).
    -step defines the granularity of the distribution. (2MB by default)
  * Web: Run a viewer to expose read-only WebHDFS API.
    -addr specifies the address to listen. (localhost:5978 by default)
  * Delimited (experimental): Generate a text file with all of the elements common
    to both inodes and inodes-under-construction, separated by a
    delimiter. The default delimiter is \t, though this may be
    changed via the -delimiter argument.

Required command line arguments:
-i,--inputFile <arg>   FSImage file to process.

Optional command line arguments:
-o,--outputFile <arg>  Name of output file. If the specified
                       file exists, it will be overwritten.
                       (output to stdout by default)
-p,--processor <arg>   Select which type of processor to apply
                       against image file. (XML|FileDistribution|Web|Delimited)
                       (Web by default)
-delimiter <arg>       Delimiting string to use with Delimited processor.
-t,--temp <arg>        Use temporary dir to cache intermediate result to generate
                       Delimited outputs. If not set, Delimited processor
                       constructs the namespace in memory before outputting text.
-h,--help              Display usage information and exit

[yinzhengjie@s101 ~]$

2>. Using the oiv command to inspect a Hadoop fsimage file

[yinzhengjie@s101 ~]$ ll
total 0
drwxrwxr-x. 4 yinzhengjie yinzhengjie 35 May 25 19:08 hadoop
drwxrwxr-x. 2 yinzhengjie yinzhengjie 96 May 25 22:05 shell
[yinzhengjie@s101 ~]$ 
[yinzhengjie@s101 ~]$ ls -lh /home/yinzhengjie/hadoop/dfs/name/current/ | grep fsimage
-rw-rw-r--. 1 yinzhengjie yinzhengjie 1.2K May 27 06:02 fsimage_0000000000000000767
-rw-rw-r--. 1 yinzhengjie yinzhengjie   62 May 27 06:02 fsimage_0000000000000000767.md5
-rw-rw-r--. 1 yinzhengjie yinzhengjie 1.4K May 27 07:58 fsimage_0000000000000000932
-rw-rw-r--. 1 yinzhengjie yinzhengjie   62 May 27 07:58 fsimage_0000000000000000932.md5
[yinzhengjie@s101 ~]$ hdfs oiv -i ./hadoop/dfs/name/current/fsimage_0000000000000000767 -o yinzhengjie.xml -p XML
[yinzhengjie@s101 ~]$ ll
total 8
drwxrwxr-x. 4 yinzhengjie yinzhengjie   35 May 25 19:08 hadoop
drwxrwxr-x. 2 yinzhengjie yinzhengjie   96 May 25 22:05 shell
-rw-rw-r--. 1 yinzhengjie yinzhengjie 4934 May 27 08:10 yinzhengjie.xml
[yinzhengjie@s101 ~]$
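
  XML is not the only processor; per the help text above, FileDistribution summarizes the file-size distribution inside the image. A sketch against the same fsimage; the output file name is arbitrary:

# Histogram of file sizes in the image, bucketed every 1 MB up to 128 MB:
hdfs oiv -p FileDistribution -maxSize 134217728 -step 1048576 \
    -i ./hadoop/dfs/name/current/fsimage_0000000000000000767 -o fsimage-dist.txt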


 

七. Examples of using hdfs with oev

1>. Viewing the help information for hdfs oev

[yinzhengjie@s101 ~]$ hdfs oev
Usage: bin/hdfs oev [OPTIONS] -i INPUT_FILE -o OUTPUT_FILE
Offline edits viewer
Parse a Hadoop edits log file INPUT_FILE and save results
in OUTPUT_FILE.
Required command line arguments:
-i,--inputFile <arg>   edits file to process, xml (case
                       insensitive) extension means XML format,
                       any other filename means binary format
-o,--outputFile <arg>  Name of output file. If the specified
                       file exists, it will be overwritten,
                       format of the file is determined
                       by -p option

Optional command line arguments:
-p,--processor <arg>   Select which type of processor to apply
                       against image file, currently supported
                       processors are: binary (native binary format
                       that Hadoop uses), xml (default, XML
                       format), stats (prints statistics about
                       edits file)
-h,--help              Display usage information and exit
-f,--fix-txids         Renumber the transaction IDs in the input,
                       so that there are no gaps or invalid
                       transaction IDs.
-r,--recover           When reading binary edit logs, use recovery
                       mode. This will give you the chance to skip
                       corrupt parts of the edit log.
-v,--verbose           More verbose output, prints the input and
                       output filenames, for processors that write
                       to a file, also output to screen. On large
                       image files this will dramatically increase
                       processing time (default is false).

Generic options supported are
-conf <configuration file>     specify an application configuration file
-D <property=value>            use value for given property
-fs <local|namenode:port>      specify a namenode
-jt <local|resourcemanager:port>    specify a ResourceManager
-files <comma separated list of files>    specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars>    specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives>    specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]

[yinzhengjie@s101 ~]$

2>. Using the oev command to inspect a Hadoop edit log file

[yinzhengjie@s101 ~]$ ls -lh /home/yinzhengjie/hadoop/dfs/name/current/ | grep edits | tail -5
-rw-rw-r--. 1 yinzhengjie yinzhengjie   42 May 27 08:33 edits_0000000000000001001-0000000000000001002
-rw-rw-r--. 1 yinzhengjie yinzhengjie   42 May 27 08:34 edits_0000000000000001003-0000000000000001004
-rw-rw-r--. 1 yinzhengjie yinzhengjie   42 May 27 08:35 edits_0000000000000001005-0000000000000001006
-rw-rw-r--. 1 yinzhengjie yinzhengjie   42 May 27 08:36 edits_0000000000000001007-0000000000000001008
-rw-rw-r--. 1 yinzhengjie yinzhengjie 1.0M May 27 08:36 edits_inprogress_0000000000000001009
[yinzhengjie@s101 ~]$ 
[yinzhengjie@s101 ~]$ ll
total 8
drwxrwxr-x. 4 yinzhengjie yinzhengjie   35 May 25 19:08 hadoop
drwxrwxr-x. 2 yinzhengjie yinzhengjie   96 May 25 22:05 shell
-rw-rw-r--. 1 yinzhengjie yinzhengjie 4934 May 27 08:10 yinzhengjie.xml
[yinzhengjie@s101 ~]$ 
[yinzhengjie@s101 ~]$ hdfs oev -i ./hadoop/dfs/name/current/edits_0000000000000001007-0000000000000001008 -o edits.xml -p XML
[yinzhengjie@s101 ~]$ ll
total 12
-rw-rw-r--. 1 yinzhengjie yinzhengjie  315 May 27 08:39 edits.xml
drwxrwxr-x. 4 yinzhengjie yinzhengjie   35 May 25 19:08 hadoop
drwxrwxr-x. 2 yinzhengjie yinzhengjie   96 May 25 22:05 shell
-rw-rw-r--. 1 yinzhengjie yinzhengjie 4934 May 27 08:10 yinzhengjie.xml
[yinzhengjie@s101 ~]$ 
[yinzhengjie@s101 ~]$ cat edits.xml 
<?xml version="1.0" encoding="UTF-8"?>
<EDITS>
  <EDITS_VERSION>-63</EDITS_VERSION>
  <RECORD>
    <OPCODE>OP_START_LOG_SEGMENT</OPCODE>
    <DATA>
      <TXID>1007</TXID>
    </DATA>
  </RECORD>
  <RECORD>
    <OPCODE>OP_END_LOG_SEGMENT</OPCODE>
    <DATA>
      <TXID>1008</TXID>
    </DATA>
  </RECORD>
</EDITS>
[yinzhengjie@s101 ~]$
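
  Per the help text above, the binary processor can turn an XML dump back into the native edits format, which is useful after hand-repairing a damaged log. A sketch using the edits.xml just generated; the output file name is arbitrary:

# Convert the XML dump back into a binary edits file:
hdfs oev -i edits.xml -o edits_restored -p binary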


 

八. Introduction to the hadoop command

  As mentioned above, "hadoop fs" is effectively equivalent to "hdfs dfs" (both invoke the same FsShell class, as the quick check below shows), but the hadoop command also supports operations that the hdfs command does not. Here are a few examples:
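
  The equivalence is easy to verify; the two commands below go through the same FsShell code and print identical listings:

# Interchangeable ways of listing the HDFS root directory:
hadoop fs -ls /
hdfs dfs -ls /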

1>. Checking which native compression libraries are installed

[yinzhengjie@s101 ~]$ hadoop checknative
18/05/27 04:40:13 WARN bzip2.Bzip2Factory: Failed to load/initialize native-bzip2 library system-native, will use pure-Java version
18/05/27 04:40:13 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop:  true /soft/hadoop-2.7.3/lib/native/libhadoop.so.1.0.0
zlib:    true /lib64/libz.so.1
snappy:  false 
lz4:     true revision:99
bzip2:   false 
openssl: false Cannot load libcrypto.so (libcrypto.so: cannot open shared object file: No such file or directory)!
[yinzhengjie@s101 ~]$
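
  checknative also accepts an -a flag that makes the command exit non-zero when any native library is missing, which is convenient in provisioning scripts. A sketch:

# Fail (non-zero exit code) if any native library is unavailable:
hadoop checknative -a
echo $?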

2>. Formatting the NameNode

[root@yinzhengjie ~]# hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

18/05/27 17:24:29 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = yinzhengjie/211.98.71.195
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.7.3
STARTUP_MSG:   classpath = /soft/hadoop-2.7.3/etc/hadoop:/soft/hadoop-2.7.3/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/stax-api-1.0-2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/activation-1.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jersey-server-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/asm-3.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/log4j-1.2.17.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jets3t-0.9.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/httpclient-4.2.5.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/httpcore-4.2.5.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-lang-2.6.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-configuration-1.6.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-digester-1.8.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/avro-1.7.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/paranamer-2.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-compress-1.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/xz-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/gson-2.2.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/hadoop-auth-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/zookeeper-3.4.6.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/netty-3.6.2.Final.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/curator-framework-2.7.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/curator-client-2.7.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jsch-0.1.42.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/junit-4.11.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/hamcrest-core-1.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/mockito-all-1.8.5.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/hadoop-annotations-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/guava-11.0.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jsr305-3.0.0.jar:/soft/hado
op-2.7.3/share/hadoop/common/lib/commons-cli-1.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-math3-3.1.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/xmlenc-0.52.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-httpclient-3.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-logging-1.1.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-codec-1.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-io-2.4.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-net-3.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/commons-collections-3.2.2.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/servlet-api-2.5.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jetty-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jetty-util-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jsp-api-2.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jersey-core-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jersey-json-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/common/lib/jettison-1.1.jar:/soft/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3-tests.jar:/soft/hadoop-2.7.3/share/hadoop/common/hadoop-nfs-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/guava-11.0.2.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-io-2.4.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/asm-3.2.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3-tests.jar:/soft/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-lang-2.6.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/guava-11.0.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-cli-1.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/log4j-1.2.17.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/s
oft/hadoop-2.7.3/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/activation-1.1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/xz-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/servlet-api-2.5.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-codec-1.4.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-core-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-client-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/guice-3.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/javax.inject-1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/aopalliance-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-io-2.4.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-server-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/asm-3.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-json-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jettison-1.1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-6.1.26.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-api-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-common-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-common-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-client-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-registry-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/xz-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/hadoop-annotations-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/soft/
hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/asm-3.2.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-3.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/javax.inject-1.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/junit-4.11.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar:/soft/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3-tests.jar:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff; compiled by 'root' on 2016-08-18T01:41Z
STARTUP_MSG:   java = 1.8.0_131
************************************************************/
18/05/27 17:24:29 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
18/05/27 17:24:29 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-36f63542-a60c-46d0-8df1-f8fa32730764
18/05/27 17:24:30 INFO namenode.FSNamesystem: No KeyProvider found.
18/05/27 17:24:30 INFO namenode.FSNamesystem: fsLock is fair:true
18/05/27 17:24:30 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
18/05/27 17:24:30 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
18/05/27 17:24:30 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
18/05/27 17:24:30 INFO blockmanagement.BlockManager: The block deletion will start around 2018 May 27 17:24:30
18/05/27 17:24:30 INFO util.GSet: Computing capacity for map BlocksMap
18/05/27 17:24:30 INFO util.GSet: VM type       = 64-bit
18/05/27 17:24:30 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
18/05/27 17:24:30 INFO util.GSet: capacity      = 2^21 = 2097152 entries
18/05/27 17:24:30 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
18/05/27 17:24:30 INFO blockmanagement.BlockManager: defaultReplication         = 1
18/05/27 17:24:30 INFO blockmanagement.BlockManager: maxReplication             = 512
18/05/27 17:24:30 INFO blockmanagement.BlockManager: minReplication             = 1
18/05/27 17:24:30 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
18/05/27 17:24:30 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
18/05/27 17:24:30 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
18/05/27 17:24:30 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
18/05/27 17:24:30 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
18/05/27 17:24:30 INFO namenode.FSNamesystem: supergroup          = supergroup
18/05/27 17:24:30 INFO namenode.FSNamesystem: isPermissionEnabled = true
18/05/27 17:24:30 INFO namenode.FSNamesystem: HA Enabled: false
18/05/27 17:24:30 INFO namenode.FSNamesystem: Append Enabled: true
18/05/27 17:24:30 INFO util.GSet: Computing capacity for map INodeMap
18/05/27 17:24:30 INFO util.GSet: VM type       = 64-bit
18/05/27 17:24:30 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
18/05/27 17:24:30 INFO util.GSet: capacity      = 2^20 = 1048576 entries
18/05/27 17:24:30 INFO namenode.FSDirectory: ACLs enabled? false
18/05/27 17:24:30 INFO namenode.FSDirectory: XAttrs enabled? true
18/05/27 17:24:30 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
18/05/27 17:24:30 INFO namenode.NameNode: Caching file names occuring more than 10 times
18/05/27 17:24:30 INFO util.GSet: Computing capacity for map cachedBlocks
18/05/27 17:24:30 INFO util.GSet: VM type       = 64-bit
18/05/27 17:24:30 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
18/05/27 17:24:30 INFO util.GSet: capacity      = 2^18 = 262144 entries
18/05/27 17:24:30 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
18/05/27 17:24:30 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
18/05/27 17:24:30 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
18/05/27 17:24:30 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
18/05/27 17:24:30 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
18/05/27 17:24:30 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
18/05/27 17:24:30 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
18/05/27 17:24:30 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
18/05/27 17:24:30 INFO util.GSet: Computing capacity for map NameNodeRetryCache
18/05/27 17:24:30 INFO util.GSet: VM type       = 64-bit
18/05/27 17:24:30 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
18/05/27 17:24:30 INFO util.GSet: capacity      = 2^15 = 32768 entries
18/05/27 17:24:30 INFO namenode.FSImage: Allocated new BlockPoolId: BP-430965362-211.98.71.195-1527467070404
18/05/27 17:24:30 INFO common.Storage: Storage directory /tmp/hadoop-root/dfs/name has been successfully formatted.
18/05/27 17:24:30 INFO namenode.FSImageFormatProtobuf: Saving image file /tmp/hadoop-root/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
18/05/27 17:24:30 INFO namenode.FSImageFormatProtobuf: Image file /tmp/hadoop-root/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 351 bytes saved in 0 seconds.
18/05/27 17:24:30 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
18/05/27 17:24:30 INFO util.ExitUtil: Exiting with status 0
18/05/27 17:24:30 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at yinzhengjie/211.98.71.195
************************************************************/
[root@yinzhengjie ~]#
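
  As the DEPRECATED warning at the top of the output notes, the NameNode should now be invoked through the hdfs script instead:

# Preferred, non-deprecated form of the same operation:
hdfs namenode -format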

3>. Running a custom jar

[yinzhengjie@s101 data]$ hadoop jar YinzhengjieMapReduce-1.0-SNAPSHOT.jar cn.org.yinzhengjie.mapreduce.wordcount.WordCountApp /world.txt /out
18/06/13 17:31:45 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
18/06/13 17:31:45 INFO input.FileInputFormat: Total input paths to process : 1
18/06/13 17:31:45 INFO mapreduce.JobSubmitter: number of splits:1
18/06/13 17:31:45 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1528935621892_0001
18/06/13 17:31:46 INFO impl.YarnClientImpl: Submitted application application_1528935621892_0001
18/06/13 17:31:46 INFO mapreduce.Job: The url to track the job: http://s101:8088/proxy/application_1528935621892_0001/
18/06/13 17:31:46 INFO mapreduce.Job: Running job: job_1528935621892_0001
18/06/13 17:31:54 INFO mapreduce.Job: Job job_1528935621892_0001 running in uber mode : false
18/06/13 17:31:54 INFO mapreduce.Job:  map 0% reduce 0%
18/06/13 17:32:00 INFO mapreduce.Job:  map 100% reduce 0%
18/06/13 17:32:07 INFO mapreduce.Job:  map 100% reduce 100%
18/06/13 17:32:08 INFO mapreduce.Job: Job job_1528935621892_0001 completed successfully
18/06/13 17:32:08 INFO mapreduce.Job: Counters: 49
    File System Counters
        FILE: Number of bytes read=1081
        FILE: Number of bytes written=244043
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=644
        HDFS: Number of bytes written=613
        HDFS: Number of read operations=6
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
    Job Counters 
        Launched map tasks=1
        Launched reduce tasks=1
        Data-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=3382
        Total time spent by all reduces in occupied slots (ms)=4417
        Total time spent by all map tasks (ms)=3382
        Total time spent by all reduce tasks (ms)=4417
        Total vcore-milliseconds taken by all map tasks=3382
        Total vcore-milliseconds taken by all reduce tasks=4417
        Total megabyte-milliseconds taken by all map tasks=3463168
        Total megabyte-milliseconds taken by all reduce tasks=4523008
    Map-Reduce Framework
        Map input records=1
        Map output records=87
        Map output bytes=901
        Map output materialized bytes=1081
        Input split bytes=91
        Combine input records=0
        Combine output records=0
        Reduce input groups=67
        Reduce shuffle bytes=1081
        Reduce input records=87
        Reduce output records=67
        Spilled Records=174
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=205
        CPU time spent (ms)=1570
        Physical memory (bytes) snapshot=363290624
        Virtual memory (bytes) snapshot=4190236672
        Total committed heap usage (bytes)=211574784
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters 
        Bytes Read=553
    File Output Format Counters 
        Bytes Written=613
[yinzhengjie@s101 data]$
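
  Once the job completes, the result can be read straight from HDFS. A sketch; the part-r-00000 file name assumes the default single-reducer output layout:

# List the output directory and print the reducer output:
hdfs dfs -ls /out
hdfs dfs -cat /out/part-r-00000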

  For more commands related to "hadoop fs", please refer to my notes:
