Failed to read data from "/user/guest/Batting.csv" (permission denied)

a guest, Dec 11th, 2015
-rw-r--r-- 1 root hadoop    9126 2015-12-11 19:16 pig_1449861377791.log
-rw-r--r-- 1 root hadoop    9126 2015-12-11 19:23 pig_1449861813663.log
-rw-r--r-- 1 root hadoop    9126 2015-12-11 19:26 pig_1449861996576.log
-rwxrwxrwx 1 root hadoop 3543194 2012-02-06 19:05 Pitching.csv
-rwxrwxrwx 1 root hadoop  367382 2012-02-07 23:46 PitchingPost.csv
-rwxrwxrwx 1 root hadoop   30363 2012-02-08 00:04 readme59.txt
-rwxrwxrwx 1 root hadoop  842057 2011-11-15 22:11 Salaries.csv
-rwxrwxrwx 1 root hadoop   50365 2011-11-15 22:11 Schools.csv
-rwxrwxrwx 1 root hadoop  205354 2011-11-15 22:11 SchoolsPlayers.csv
-rwxrwxrwx 1 root hadoop    8088 2012-02-07 23:34 SeriesPost.csv
-rwxrwxrwx 1 root hadoop  524526 2012-02-03 23:50 Teams.csv
-rwxrwxrwx 1 root hadoop    4111 2011-11-15 22:11 TeamsFranchises.csv
-rwxrwxrwx 1 root hadoop    2149 2011-11-15 22:11 TeamsHalf.csv
[root@sandbox lahman591-csv]# hdfs dfs -ls /user/guest
Found 1 items
-rwxrwxrwx   3 root sandbox    6398886 2015-12-11 00:53 /user/guest/Batting.csv
[root@sandbox lahman591-csv]# su hdfs; dfs -chown -R root:hadoop /user/guest
[hdfs@sandbox lahman591-csv]$ hdfs dfs -chown -R root:hadoop /user/guest
[hdfs@sandbox lahman591-csv]$ exit
exit
-bash: dfs: command not found
[root@sandbox lahman591-csv]# hdfs dfs -ls /user/guest
Found 1 items
-rwxrwxrwx   3 root hadoop    6398886 2015-12-11 00:53 /user/guest/Batting.csv
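Editor's note: the `su hdfs; dfs -chown -R root:hadoop /user/guest` one-liner above cannot work as typed. The shell treats `;` as a command separator, so the second half only runs after the `su hdfs` login shell exits, back in the original root shell, where there is no bare `dfs` binary (the subcommand exists only as `hdfs dfs`). Hence the `-bash: dfs: command not found` a few lines later. A minimal sketch of the semantics and the usual workaround:

```shell
# `cmd1; cmd2` runs cmd2 in the CURRENT shell after cmd1 finishes.
# Generic demonstration of that ordering (no Hadoop required):
sh -c 'echo "inside child shell"'; echo "back in parent shell"

# The usual pattern is to hand su the whole command with -c, e.g.:
# su - hdfs -c 'hdfs dfs -chown -R root:hadoop /user/guest'
```

In the session this was worked around interactively instead: the chown was retyped inside the hdfs shell, which is equivalent.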
[root@sandbox lahman591-csv]# pig 1.pig
WARNING: Use "yarn jar" to launch YARN applications.
15/12/11 19:31:44 INFO pig.ExecTypeProvider: Trying ExecType : LOCAL
15/12/11 19:31:44 INFO pig.ExecTypeProvider: Trying ExecType : MAPREDUCE
15/12/11 19:31:44 INFO pig.ExecTypeProvider: Picked MAPREDUCE as the ExecType
2015-12-11 19:31:44,356 [main] INFO  org.apache.pig.Main - Apache Pig version 0.15.0.2.3.2.0-2950 (rexported) compiled Sep 30 2015, 19:39:20
2015-12-11 19:31:44,356 [main] INFO  org.apache.pig.Main - Logging error messages to: /root/lahman591-csv/pig_1449862304354.log
2015-12-11 19:31:45,137 [main] INFO  org.apache.pig.impl.util.Utils - Default bootup file /root/.pigbootup not found
2015-12-11 19:31:45,257 [main] INFO  org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://sandbox.hortonworks.com:8020
2015-12-11 19:31:46,516 [main] WARN  org.apache.pig.newplan.BaseOperatorPlan - Encountered Warning IMPLICIT_CAST_TO_INT 1 time(s).
2015-12-11 19:31:46,516 [main] WARN  org.apache.pig.newplan.BaseOperatorPlan - Encountered Warning IMPLICIT_CAST_TO_DOUBLE 1 time(s).
2015-12-11 19:31:46,545 [main] INFO  org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: HASH_JOIN,GROUP_BY,FILTER
2015-12-11 19:31:46,593 [main] INFO  org.apache.pig.data.SchemaTupleBackend - Key [pig.schematuple] was not set... will not generate code.
2015-12-11 19:31:46,641 [main] INFO  org.apache.pig.newplan.logical.optimizer.LogicalPlanOptimizer - {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, ConstantCalculator, GroupByConstParallelSetter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, PartitionFilterOptimizer, PredicatePushdownOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter]}
2015-12-11 19:31:46,785 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
2015-12-11 19:31:46,819 [main] INFO  org.apache.pig.backend.hadoop.executionengine.util.CombinerOptimizerUtil - Choosing to move algebraic foreach to combiner
2015-12-11 19:31:46,860 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler$LastInputStreamingOptimizer - Rewrite: POPackage->POForEach to POPackage(JoinPackager)
2015-12-11 19:31:46,873 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 3
2015-12-11 19:31:46,874 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - Merged 1 map-reduce splittees.
2015-12-11 19:31:46,874 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - Merged 1 out of total 3 MR operators.
2015-12-11 19:31:46,874 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 2
2015-12-11 19:31:47,388 [main] INFO  org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://sandbox.hortonworks.com:8188/ws/v1/timeline/
2015-12-11 19:31:47,555 [main] INFO  org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at sandbox.hortonworks.com/10.0.2.15:8050
2015-12-11 19:31:47,752 [main] INFO  org.apache.pig.tools.pigstats.mapreduce.MRScriptState - Pig script settings are added to the job
2015-12-11 19:31:47,758 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
2015-12-11 19:31:47,760 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Reduce phase detected, estimating # of required reducers.
2015-12-11 19:31:47,762 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Using reducer estimator: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator
2015-12-11 19:31:47,777 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator - BytesPerReducer=1000000000 maxReducers=999 totalInputFileSize=6398886
2015-12-11 19:31:47,777 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting Parallelism to 1
2015-12-11 19:31:47,777 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - This job cannot be converted run in-process
2015-12-11 19:31:48,160 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Added jar file:/usr/hdp/2.3.2.0-2950/pig/pig-0.15.0.2.3.2.0-2950-core-h2.jar to DistributedCache through /tmp/temp-1331804897/tmp-86214663/pig-0.15.0.2.3.2.0-2950-core-h2.jar
2015-12-11 19:31:48,195 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Added jar file:/usr/hdp/2.3.2.0-2950/pig/lib/automaton-1.11-8.jar to DistributedCache through /tmp/temp-1331804897/tmp507004575/automaton-1.11-8.jar
2015-12-11 19:31:48,230 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Added jar file:/usr/hdp/2.3.2.0-2950/pig/lib/antlr-runtime-3.4.jar to DistributedCache through /tmp/temp-1331804897/tmp-2114352180/antlr-runtime-3.4.jar
2015-12-11 19:31:48,280 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Added jar file:/usr/hdp/2.3.2.0-2950/hadoop-mapreduce/joda-time-2.8.2.jar to DistributedCache through /tmp/temp-1331804897/tmp1114773941/joda-time-2.8.2.jar
2015-12-11 19:31:48,349 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting up multi store job
2015-12-11 19:31:48,360 [main] INFO  org.apache.pig.data.SchemaTupleFrontend - Key [pig.schematuple] is false, will not generate code.
2015-12-11 19:31:48,360 [main] INFO  org.apache.pig.data.SchemaTupleFrontend - Starting process to move generated code to distributed cacche
2015-12-11 19:31:48,360 [main] INFO  org.apache.pig.data.SchemaTupleFrontend - Setting key [pig.schematuple.classes] with classes to deserialize []
2015-12-11 19:31:48,471 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 1 map-reduce job(s) waiting for submission.
2015-12-11 19:31:48,717 [JobControl] INFO  org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://sandbox.hortonworks.com:8188/ws/v1/timeline/
2015-12-11 19:31:48,717 [JobControl] INFO  org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at sandbox.hortonworks.com/10.0.2.15:8050
2015-12-11 19:31:48,800 [JobControl] INFO  org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob - PigLatin:1.pig got an error while submitting
org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/user/root/.staging":hdfs:hdfs:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
    at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkPermission(RangerHdfsAuthorizer.java:300)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1771)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1755)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1738)
    at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3896)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2137)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2133)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2131)

    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3010)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2978)
    at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
    at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1043)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
    at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:133)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:144)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
    at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.pig.backend.hadoop23.PigJobControl.submit(PigJobControl.java:128)
    at org.apache.pig.backend.hadoop23.PigJobControl.run(PigJobControl.java:194)
    at java.lang.Thread.run(Thread.java:745)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:276)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=root, access=WRITE, inode="/user/root/.staging":hdfs:hdfs:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
    at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkPermission(RangerHdfsAuthorizer.java:300)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1771)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1755)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1738)
    at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3896)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2137)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2133)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2131)

    at org.apache.hadoop.ipc.Client.call(Client.java:1427)
    at org.apache.hadoop.ipc.Client.call(Client.java:1358)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy11.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:558)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy12.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3008)
    ... 23 more
2015-12-11 19:31:48,975 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
2015-12-11 19:31:53,993 [main] WARN  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Ooops! Some job has failed! Specify -stop_on_failure if you want Pig to stop immediately on failure.
2015-12-11 19:31:53,993 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - job null has failed! Stop running all dependent jobs
2015-12-11 19:31:53,993 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
2015-12-11 19:31:54,044 [main] ERROR org.apache.pig.tools.pigstats.mapreduce.MRPigStatsUtil - 1 map reduce job(s) failed!
2015-12-11 19:31:54,047 [main] INFO  org.apache.pig.tools.pigstats.mapreduce.SimplePigStats - Script Statistics:

HadoopVersion   PigVersion  UserId  StartedAt   FinishedAt  Features
2.7.1.2.3.2.0-2950  0.15.0.2.3.2.0-2950 root    2015-12-11 19:31:47 2015-12-11 19:31:54 HASH_JOIN,GROUP_BY,FILTER

Failed!

Failed Jobs:
JobId   Alias   Feature Message Outputs
N/A batting,grp_data,max_runs,raw_runs,runs MULTI_QUERY,COMBINER    Message: org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/user/root/.staging":hdfs:hdfs:drwxr-xr-x
    [stack trace and "Caused by" chain identical to the submission-time AccessControlException above; elided]

Input(s):
Failed to read data from "/user/guest/Batting.csv"

Output(s):

Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0

Job DAG:
null    ->  null,
null

2015-12-11 19:31:54,049 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
2015-12-11 19:31:54,052 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1066: Unable to open iterator for alias join_data
Details at logfile: /root/lahman591-csv/pig_1449862304354.log
2015-12-11 19:31:54,097 [main] INFO  org.apache.pig.Main - Pig script completed in 9 seconds and 968 milliseconds (9968 ms)
[root@sandbox lahman591-csv]# hdfs dfs -ls /user/guest
Found 1 items
-rwxrwxrwx   3 root hadoop    6398886 2015-12-11 00:53 /user/guest/Batting.csv
[root@sandbox lahman591-csv]# ls -l
total 30496
-rwxrwxrwx 1 root hadoop     454 2015-12-11 03:44 1.pig
-rwxrwxrwx 1 root hadoop  195488 2011-11-29 04:55 AllstarFull.csv
-rwxrwxrwx 1 root hadoop 5651119 2012-02-07 23:56 Appearances.csv
-rwxrwxrwx 1 root hadoop    2273 2011-11-28 23:57 AwardsManagers.csv
-rwxrwxrwx 1 root hadoop   97304 2011-11-29 00:15 AwardsPlayers.csv
-rwxrwxrwx 1 root hadoop   16134 2011-11-29 05:25 AwardsShareManagers.csv
-rwxrwxrwx 1 root hadoop  216987 2011-11-29 05:44 AwardsSharePlayers.csv
-rwxrwxrwx 1 root hadoop 6398886 2014-09-05 00:05 Batting.csv
-rwxrwxrwx 1 root hadoop  621765 2012-02-07 23:34 BattingPost.csv
-rwxrwxrwx 1 root hadoop 8063747 2011-11-28 23:49 Fielding.csv
-rwxrwxrwx 1 root hadoop  322538 2011-11-15 22:11 FieldingOF.csv
-rwxrwxrwx 1 root hadoop  552230 2012-02-07 23:34 FieldingPost.csv
-rwxrwxrwx 1 root hadoop  172984 2012-02-07 23:23 HallOfFame.csv
-rwxrwxrwx 1 root hadoop  133114 2011-11-29 18:59 Managers.csv
-rwxrwxrwx 1 root hadoop    4240 2011-11-15 22:11 ManagersHalf.csv
-rwxrwxrwx 1 root hadoop 3024713 2012-02-07 23:34 Master.csv
-rw-r--r-- 1 root hadoop    9126 2015-12-11 01:08 pig_1449796077165.log
-rw-r--r-- 1 root hadoop    9126 2015-12-11 01:10 pig_1449796244442.log
-rw-r--r-- 1 root hadoop    9126 2015-12-11 01:15 pig_1449796523062.log
-rw-r--r-- 1 root hadoop    9126 2015-12-11 01:27 pig_1449797245424.log
-rw-r--r-- 1 root hadoop    9126 2015-12-11 01:33 pig_1449797575651.log
-rw-r--r-- 1 root hadoop    9126 2015-12-11 03:44 pig_1449805474981.log
-rw-r--r-- 1 root hadoop       0 2015-12-11 07:03 pig_1449817401972.log
-rw-r--r-- 1 root hadoop    9126 2015-12-11 19:16 pig_1449861377791.log
-rw-r--r-- 1 root hadoop    9126 2015-12-11 19:23 pig_1449861813663.log
-rw-r--r-- 1 root hadoop    9126 2015-12-11 19:26 pig_1449861996576.log
-rw-r--r-- 1 root root      9126 2015-12-11 19:31 pig_1449862304354.log
-rwxrwxrwx 1 root hadoop 3543194 2012-02-06 19:05 Pitching.csv
-rwxrwxrwx 1 root hadoop  367382 2012-02-07 23:46 PitchingPost.csv
-rwxrwxrwx 1 root hadoop   30363 2012-02-08 00:04 readme59.txt
-rwxrwxrwx 1 root hadoop  842057 2011-11-15 22:11 Salaries.csv
-rwxrwxrwx 1 root hadoop   50365 2011-11-15 22:11 Schools.csv
-rwxrwxrwx 1 root hadoop  205354 2011-11-15 22:11 SchoolsPlayers.csv
-rwxrwxrwx 1 root hadoop    8088 2012-02-07 23:34 SeriesPost.csv
-rwxrwxrwx 1 root hadoop  524526 2012-02-03 23:50 Teams.csv
-rwxrwxrwx 1 root hadoop    4111 2011-11-15 22:11 TeamsFranchises.csv
-rwxrwxrwx 1 root hadoop    2149 2011-11-15 22:11 TeamsHalf.csv
[root@sandbox lahman591-csv]#
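Editor's note: despite the summary line "Failed to read data from /user/guest/Batting.csv", the input file's permissions were already fixed. The real denial is at job submission: the exception shows a mkdir of /user/root/.staging rejected because /user/root is owned by hdfs:hdfs with mode drwxr-xr-x, so root cannot WRITE into its own HDFS home. A sketch of the usual remedy (assumes an HDP sandbox where `hdfs` is the HDFS superuser; run before retrying `pig 1.pig`):

```shell
# Run as the HDFS superuser, e.g. after `su - hdfs`:
# create root's HDFS home directory and hand it to root so the
# MapReduce staging dir /user/root/.staging can be created there.
hdfs dfs -mkdir -p /user/root
hdfs dfs -chown -R root:hadoop /user/root
```

The alternative is simply to run the script as a user that already owns its /user/<name> directory instead of as root.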