Chapter 3 HDFS: The Distributed File System
3.5 Basic HDFS Commands
Official HDFS commands documentation:
http://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html
3.5.1 Usage
[root@node1 ~]# hdfs dfs
Usage: hadoop fs [generic options]
    [-appendToFile <localsrc> ... <dst>]
    [-cat [-ignoreCrc] <src> ...]
    [-checksum <src> ...]
    [-chgrp [-R] GROUP PATH...]
    [-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
    [-chown [-R] [OWNER][:[GROUP]] PATH...]
    [-copyFromLocal [-f] [-p] [-l] <localsrc> ... <dst>]
    [-copyToLocal [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
    [-count [-q] [-h] [-v] [-x] <path> ...]
    [-cp [-f] [-p | -p[topax]] <src> ... <dst>]
    [-createSnapshot <snapshotDir> [<snapshotName>]]
    [-deleteSnapshot <snapshotDir> <snapshotName>]
    [-df [-h] [<path> ...]]
    [-du [-s] [-h] [-x] <path> ...]
    [-expunge]
    [-find <path> ... <expression> ...]
    [-get [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
    [-getfacl [-R] <path>]
    [-getfattr [-R] {-n name | -d} [-e en] <path>]
    [-getmerge [-nl] <src> <localdst>]
    [-help [cmd ...]]
    [-ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [<path> ...]]
    [-mkdir [-p] <path> ...]
    [-moveFromLocal <localsrc> ... <dst>]
    [-moveToLocal <src> <localdst>]
    [-mv <src> ... <dst>]
    [-put [-f] [-p] [-l] <localsrc> ... <dst>]
    [-renameSnapshot <snapshotDir> <oldName> <newName>]
    [-rm [-f] [-r|-R] [-skipTrash] <src> ...]
    [-rmdir [--ignore-fail-on-non-empty] <dir> ...]
    [-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
    [-setfattr {-n name [-v value] | -x name} <path>]
    [-setrep [-R] [-w] <rep> <path> ...]
    [-stat [format] <path> ...]
    [-tail [-f] <file>]
    [-test -[defsz] <path>]
    [-text [-ignoreCrc] <src> ...]
    [-touchz <path> ...]
    [-usage [cmd ...]]

Generic options supported are
-conf <configuration file>     specify an application configuration file
-D <property=value>            use value for given property
-fs <local|namenode:port>      specify a namenode
-jt <local|resourcemanager:port>    specify a ResourceManager
-files <comma separated list of files>    specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars>    specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives>    specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]
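The generic options above are accepted by every hadoop/hdfs subcommand and are applied before the command-specific options. As an illustrative sketch (the paths, host name, and port are hypothetical, and a running cluster is assumed):

```shell
# Override a configuration property for this one command only:
# store the uploaded file with a replication factor of 2.
hadoop fs -D dfs.replication=2 -put localfile /user/root/input/

# Point the client at an explicit NameNode instead of the one
# configured in core-site.xml.
hadoop fs -fs hdfs://node1:8020 -ls /
```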
3.5.2 hdfs dfs -mkdir
The -p option behavior is much like Unix mkdir -p, creating parent directories along the path.
[root@node1 ~]# hdfs dfs -mkdir -p input
[root@node1 ~]# hdfs dfs -mkdir -p /abc
Directories created with a relative path are placed under /user/{username}/ by default, where {username} is the current user name, so the input directory ends up under /user/root/. The second command creates the abc directory directly under the HDFS root.
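A quick way to confirm where a relative path lands: running -ls with no path argument lists the current user's home directory, so on this (hypothetical) session the two commands below are equivalent:

```shell
# With no path argument, -ls defaults to /user/<current user>
hdfs dfs -ls
# For the root user, this is the same as:
hdfs dfs -ls /user/root
```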
3.5.3 hdfs dfs -ls
[root@node1 ~]# hdfs dfs -ls /
Found 2 items
drwxr-xr-x   - root supergroup          0 2017-05-14 09:40 /abc
drwxr-xr-x   - root supergroup          0 2017-05-14 09:37 /user
[root@node1 ~]# hdfs dfs -ls /user
Found 1 items
drwxr-xr-x   - root supergroup          0 2017-05-14 09:37 /user/root
[root@node1 ~]# hdfs dfs -ls /user/root
Found 1 items
drwxr-xr-x   - root supergroup          0 2017-05-14 09:37 /user/root/input
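The -ls flags shown in the usage listing in 3.5.1 can be combined. For example (hypothetical session, output omitted):

```shell
# Recursive listing with human-readable sizes, sorted by modification time
hdfs dfs -ls -R -h -t /user
```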
3.5.4 hdfs dfs -put
Usage: hdfs dfs -put [-f] [-p] [-l] <localsrc> ... <dst>
Copy single src, or multiple srcs, from the local file system to the destination file system. Also reads input from stdin and writes to the destination file system.
hdfs dfs -put localfile /user/hadoop/hadoopfile
hdfs dfs -put localfile1 localfile2 /user/hadoop/hadoopdir
hdfs dfs -put localfile hdfs://nn.example.com/hadoop/hadoopfile
hdfs dfs -put - hdfs://nn.example.com/hadoop/hadoopfile Reads the input from stdin.
Exit Code:
Returns 0 on success and -1 on error.
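Following the usage above, reading from stdin and checking the exit code can be sketched like this (the destination paths are hypothetical, and a configured cluster is assumed):

```shell
# The "-" source tells -put to read the file contents from stdin.
echo "hello hdfs" | hdfs dfs -put - /user/root/input/hello.txt

# In scripts, branch on the exit code (0 = success).
if hdfs dfs -put -f localfile /user/root/input/; then
    echo "upload succeeded"
else
    echo "upload failed" >&2
fi
```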