Permission denied: user=root, access=WRITE, inode=

Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=root, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x
	at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:279)
	at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:260)
	at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:240)
	at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:162)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:152)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:3529)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:3512)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:3494)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:6599)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:4384)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4354)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4327)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:873)
	at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.mkdirs(AuthorizationProviderProxyClientProtocol.java:323)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:618)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2216)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2212)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1796)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2210)

	at org.apache.hadoop.ipc.Client.call(Client.java:1472)
	at org.apache.hadoop.ipc.Client.call(Client.java:1409)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
	at com.sun.proxy.$Proxy16.mkdirs(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:558)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
	at com.sun.proxy.$Proxy17.mkdirs(Unknown Source)
	at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3110)
	... 42 more
Job Submission failed with exception 'org.apache.hadoop.security.AccessControlException(Permission denied: user=root, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x
	at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkFsPermission(DefaultAuthorizationProvider.java:279)
	at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:260)
	at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.check(DefaultAuthorizationProvider.java:240)
	at org.apache.hadoop.hdfs.server.namenode.DefaultAuthorizationProvider.checkPermission(DefaultAuthorizationProvider.java:162)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:152)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:3529)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:3512)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:3494)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:6599)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:4384)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4354)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4327)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:873)
	at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.mkdirs(AuthorizationProviderProxyClientProtocol.java:323)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:618)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2216)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2212)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1796)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2210)
)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
hive> drop table ml_123;

The root user cannot write to HDFS: the /user directory is owned by hdfs:supergroup with mode drwxr-xr-x (755), so only the HDFS superuser hdfs may create entries under it. Unlike on a local filesystem, root has no special privileges in HDFS.
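You can confirm the ownership and mode that the error message reports directly from the shell (a minimal check; -d lists the directory itself rather than its contents, and the output line below is illustrative, not from this cluster):

hadoop fs -ls -d /user
# drwxr-xr-x   - hdfs supergroup          0 2017-08-01 10:00 /user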

There are two possible fixes:

 1. Change the HDFS parameter dfs.permissions to false (dfs.permissions.enabled in newer Hadoop releases), which disables permission checking cluster-wide.

 2. Grant root the necessary permissions on HDFS (sketched below).

I chose the first option: changing the parameter.
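For reference, the second fix is the more targeted one. A minimal sketch, assuming a root shell on a cluster node where sudo can run commands as the hdfs superuser:

sudo -u hdfs hadoop fs -mkdir -p /user/root          # create a home directory for root in HDFS
sudo -u hdfs hadoop fs -chown root:root /user/root   # hand ownership to root so it can write there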

Step 1. Set dfs.permissions to false for HDFS. On a CDH cluster this is done in Cloudera Manager: open the HDFS service configuration, uncheck "Check HDFS Permissions", and save (the original screenshot for this step is omitted; the non-Cloudera-Manager equivalent is sketched below).
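On a cluster managed without Cloudera Manager, the equivalent change is adding this property inside <configuration> in hdfs-site.xml (a sketch; the file usually lives under /etc/hadoop/conf, but the path varies by install):

<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>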


2. Restart the HDFS service so the new setting takes effect.

3. After the restart succeeds, log back in to the shell; commands run as root now work, as the quick check and Hive session below show.
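Before rerunning the Hive job, a quick smoke test confirms that root can now write to HDFS (/user/root_test is just a throwaway scratch name):

hadoop fs -mkdir /user/root_test    # previously failed with AccessControlException
hadoop fs -rm -r /user/root_test    # clean up the scratch directory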

[root@… run]# hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hbase/lib/phoenix-4.8.0-cdh5.8.0-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hbase/lib/phoenix-4.8.0-cdh5.8.0-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]

Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/jars/hive-common-1.1.0-cdh5.10.0.jar!/hive-log4j.properties
WARNING: Hive CLI is deprecated and migration to Beeline is recommended.
hive> use prestat;
OK
Time taken: 0.364 seconds
hive> insert overwrite table prestat.st_u_cl_hour partition(day=20170801,minute='1100')
    >  select '2','101999' ;
Query ID = root_20170804144343_4c49ea53-41cf-4b74-9c43-1e666993e368
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1501750011201_0002, Tracking URL = http://slave02:8088/proxy/application_1501750011201_0002/
Kill Command = /opt/cloudera/parcels/CDH-5.10.0-1.cdh5.10.0.p0.41/lib/hadoop/bin/hadoop job  -kill job_1501750011201_0002
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2017-08-04 14:43:38,966 Stage-1 map = 0%,  reduce = 0%
2017-08-04 14:43:45,251 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.49 sec
MapReduce Total cumulative CPU time: 1 seconds 490 msec
Ended Job = job_1501750011201_0002
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to: hdfs://master:8020/user/hive/warehouse/prestat.db/st_u_cl_hour/day=20170801/minute=1100/.hive-staging_hive_2017-08-04_14-43-26_964_7376815516924274909-1/-ext-10000
Loading data to table prestat.st_u_cl_hour partition (day=20170801, minute=1100)
Partition prestat.st_u_cl_hour{day=20170801, minute=1100} stats: [numFiles=1, numRows=1, totalSize=307, rawDataSize=2]
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1   Cumulative CPU: 1.49 sec   HDFS Read: 3568 HDFS Write: 408 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 490 msec
OK
Time taken: 21.608 seconds
hive> 

