Failed to read data from "/user/guest/Batting.csv"
birdmw · Dec 11th, 2015
[root@sandbox ~]# ls
anaconda-ks.cfg  install.log.syslog  __MACOSX         start_hbase.sh
build.out        lahman591-csv       sandbox.info     start_solr.sh
install.log      lahman591-csv.zip   start_ambari.sh  stop_solr.sh
[root@sandbox ~]# cd lahman591-csv
[root@sandbox lahman591-csv]# ls
1.pig                    FieldingPost.csv       PitchingPost.csv
AllstarFull.csv          HallOfFame.csv         readme59.txt
Appearances.csv          Managers.csv           Salaries.csv
AwardsManagers.csv       ManagersHalf.csv       Schools.csv
AwardsPlayers.csv        Master.csv             SchoolsPlayers.csv
AwardsShareManagers.csv  pig_1449796077165.log  SeriesPost.csv
AwardsSharePlayers.csv   pig_1449796244442.log  Teams.csv
Batting.csv              pig_1449796523062.log  TeamsFranchises.csv
BattingPost.csv          pig_1449797245424.log  TeamsHalf.csv
Fielding.csv             pig_1449797575651.log
FieldingOF.csv           Pitching.csv
[root@sandbox lahman591-csv]# hadoop fs -ls /user/guest/
Found 1 items
-rwxrwxrwx   3 root guest    6398886 2015-12-11 00:53 /user/guest/Batting.csv
[root@sandbox lahman591-csv]# pig 1.pig
WARNING: Use "yarn jar" to launch YARN applications.
15/12/11 03:44:34 INFO pig.ExecTypeProvider: Trying ExecType : LOCAL
15/12/11 03:44:34 INFO pig.ExecTypeProvider: Trying ExecType : MAPREDUCE
15/12/11 03:44:34 INFO pig.ExecTypeProvider: Picked MAPREDUCE as the ExecType
2015-12-11 03:44:34,983 [main] INFO  org.apache.pig.Main - Apache Pig version 0.15.0.2.3.2.0-2950 (rexported) compiled Sep 30 2015, 19:39:20
2015-12-11 03:44:34,983 [main] INFO  org.apache.pig.Main - Logging error messages to: /root/lahman591-csv/pig_1449805474981.log
2015-12-11 03:44:35,804 [main] INFO  org.apache.pig.impl.util.Utils - Default bootup file /root/.pigbootup not found
2015-12-11 03:44:35,931 [main] INFO  org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://sandbox.hortonworks.com:8020
2015-12-11 03:44:37,709 [main] WARN  org.apache.pig.newplan.BaseOperatorPlan - Encountered Warning IMPLICIT_CAST_TO_DOUBLE 1 time(s).
2015-12-11 03:44:37,709 [main] WARN  org.apache.pig.newplan.BaseOperatorPlan - Encountered Warning IMPLICIT_CAST_TO_INT 1 time(s).
2015-12-11 03:44:37,735 [main] INFO  org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: HASH_JOIN,GROUP_BY,FILTER
2015-12-11 03:44:37,790 [main] INFO  org.apache.pig.data.SchemaTupleBackend - Key [pig.schematuple] was not set... will not generate code.
2015-12-11 03:44:37,855 [main] INFO  org.apache.pig.newplan.logical.optimizer.LogicalPlanOptimizer - {RULES_ENABLED=[AddForEach, ColumnMapKeyPrune, ConstantCalculator, GroupByConstParallelSetter, LimitOptimizer, LoadTypeCastInserter, MergeFilter, MergeForEach, PartitionFilterOptimizer, PredicatePushdownOptimizer, PushDownForEachFlatten, PushUpFilter, SplitFilter, StreamTypeCastInserter]}
2015-12-11 03:44:38,100 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
2015-12-11 03:44:38,147 [main] INFO  org.apache.pig.backend.hadoop.executionengine.util.CombinerOptimizerUtil - Choosing to move algebraic foreach to combiner
2015-12-11 03:44:38,196 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler$LastInputStreamingOptimizer - Rewrite: POPackage->POForEach to POPackage(JoinPackager)
2015-12-11 03:44:38,210 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 3
2015-12-11 03:44:38,211 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - Merged 1 map-reduce splittees.
2015-12-11 03:44:38,211 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - Merged 1 out of total 3 MR operators.
2015-12-11 03:44:38,211 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 2
2015-12-11 03:44:38,696 [main] INFO  org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://sandbox.hortonworks.com:8188/ws/v1/timeline/
2015-12-11 03:44:38,855 [main] INFO  org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at sandbox.hortonworks.com/10.0.2.15:8050
2015-12-11 03:44:39,125 [main] INFO  org.apache.pig.tools.pigstats.mapreduce.MRScriptState - Pig script settings are added to the job
2015-12-11 03:44:39,131 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
2015-12-11 03:44:39,134 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Reduce phase detected, estimating # of required reducers.
2015-12-11 03:44:39,135 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Using reducer estimator: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator
2015-12-11 03:44:39,144 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.InputSizeReducerEstimator - BytesPerReducer=1000000000 maxReducers=999 totalInputFileSize=6398886
2015-12-11 03:44:39,144 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting Parallelism to 1
2015-12-11 03:44:39,144 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - This job cannot be converted run in-process
2015-12-11 03:44:39,626 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Added jar file:/usr/hdp/2.3.2.0-2950/pig/pig-0.15.0.2.3.2.0-2950-core-h2.jar to DistributedCache through /tmp/temp-1568975173/tmp2076390742/pig-0.15.0.2.3.2.0-2950-core-h2.jar
2015-12-11 03:44:39,668 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Added jar file:/usr/hdp/2.3.2.0-2950/pig/lib/automaton-1.11-8.jar to DistributedCache through /tmp/temp-1568975173/tmp1775023696/automaton-1.11-8.jar
2015-12-11 03:44:39,708 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Added jar file:/usr/hdp/2.3.2.0-2950/pig/lib/antlr-runtime-3.4.jar to DistributedCache through /tmp/temp-1568975173/tmp237569546/antlr-runtime-3.4.jar
2015-12-11 03:44:39,754 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Added jar file:/usr/hdp/2.3.2.0-2950/hadoop-mapreduce/joda-time-2.8.2.jar to DistributedCache through /tmp/temp-1568975173/tmp-948758207/joda-time-2.8.2.jar
2015-12-11 03:44:39,822 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting up multi store job
2015-12-11 03:44:39,837 [main] INFO  org.apache.pig.data.SchemaTupleFrontend - Key [pig.schematuple] is false, will not generate code.
2015-12-11 03:44:39,837 [main] INFO  org.apache.pig.data.SchemaTupleFrontend - Starting process to move generated code to distributed cacche
2015-12-11 03:44:39,837 [main] INFO  org.apache.pig.data.SchemaTupleFrontend - Setting key [pig.schematuple.classes] with classes to deserialize []
2015-12-11 03:44:39,994 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 1 map-reduce job(s) waiting for submission.
2015-12-11 03:44:40,244 [JobControl] INFO  org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl - Timeline service address: http://sandbox.hortonworks.com:8188/ws/v1/timeline/
2015-12-11 03:44:40,247 [JobControl] INFO  org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at sandbox.hortonworks.com/10.0.2.15:8050
2015-12-11 03:44:40,356 [JobControl] INFO  org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob - PigLatin:1.pig got an error while submitting
org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/user/root/.staging":hdfs:hdfs:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
    at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkPermission(RangerHdfsAuthorizer.java:300)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1771)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1755)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1738)
    at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3896)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2137)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2133)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2131)

    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3010)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2978)
    at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
    at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1043)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
    at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:133)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:144)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
    at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.pig.backend.hadoop23.PigJobControl.submit(PigJobControl.java:128)
    at org.apache.pig.backend.hadoop23.PigJobControl.run(PigJobControl.java:194)
    at java.lang.Thread.run(Thread.java:745)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:276)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=root, access=WRITE, inode="/user/root/.staging":hdfs:hdfs:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
    at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkPermission(RangerHdfsAuthorizer.java:300)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1771)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1755)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1738)
    at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3896)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2137)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2133)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2131)

    at org.apache.hadoop.ipc.Client.call(Client.java:1427)
    at org.apache.hadoop.ipc.Client.call(Client.java:1358)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy11.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:558)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy12.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3008)
    ... 23 more
2015-12-11 03:44:40,498 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
2015-12-11 03:44:45,543 [main] WARN  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Ooops! Some job has failed! Specify -stop_on_failure if you want Pig to stop immediately on failure.
2015-12-11 03:44:45,543 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - job null has failed! Stop running all dependent jobs
2015-12-11 03:44:45,543 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
2015-12-11 03:44:45,648 [main] ERROR org.apache.pig.tools.pigstats.mapreduce.MRPigStatsUtil - 1 map reduce job(s) failed!
2015-12-11 03:44:45,651 [main] INFO  org.apache.pig.tools.pigstats.mapreduce.SimplePigStats - Script Statistics:

HadoopVersion   PigVersion  UserId  StartedAt   FinishedAt  Features
2.7.1.2.3.2.0-2950  0.15.0.2.3.2.0-2950 root    2015-12-11 03:44:39 2015-12-11 03:44:45 HASH_JOIN,GROUP_BY,FILTER

Failed!

Failed Jobs:
JobId   Alias   Feature Message Outputs
N/A batting,grp_data,max_runs,raw_runs,runs MULTI_QUERY,COMBINER    Message: org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/user/root/.staging":hdfs:hdfs:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
    at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkPermission(RangerHdfsAuthorizer.java:300)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1771)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1755)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1738)
    at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3896)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2137)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2133)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2131)

    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3010)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2978)
    at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1047)
    at org.apache.hadoop.hdfs.DistributedFileSystem$21.doCall(DistributedFileSystem.java:1043)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1043)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1036)
    at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:133)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:144)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
    at org.apache.hadoop.mapreduce.lib.jobcontrol.ControlledJob.submit(ControlledJob.java:335)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.pig.backend.hadoop23.PigJobControl.submit(PigJobControl.java:128)
    at org.apache.pig.backend.hadoop23.PigJobControl.run(PigJobControl.java:194)
    at java.lang.Thread.run(Thread.java:745)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher$1.run(MapReduceLauncher.java:276)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=root, access=WRITE, inode="/user/root/.staging":hdfs:hdfs:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
    at org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer$RangerAccessControlEnforcer.checkPermission(RangerHdfsAuthorizer.java:300)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1771)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1755)
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1738)
    at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:71)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3896)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:984)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:622)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2137)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2133)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2131)

    at org.apache.hadoop.ipc.Client.call(Client.java:1427)
    at org.apache.hadoop.ipc.Client.call(Client.java:1358)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy11.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:558)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy12.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:3008)
    ... 23 more

Input(s):
Failed to read data from "/user/guest/Batting.csv"

Output(s):

Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0

Job DAG:
null    ->  null,
null

2015-12-11 03:44:45,653 [main] INFO  org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
2015-12-11 03:44:45,655 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1066: Unable to open iterator for alias join_data
Details at logfile: /root/lahman591-csv/pig_1449805474981.log
2015-12-11 03:44:45,735 [main] INFO  org.apache.pig.Main - Pig script completed in 10 seconds and 955 milliseconds (10955 ms)
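
The headline messages ("Failed to read data from /user/guest/Batting.csv", ERROR 1066) are downstream symptoms: the MapReduce job was never submitted. The root cause is the AccessControlException above: the script runs as root, which has no HDFS home directory, so the job client cannot create the staging area /user/root/.staging (the parent inode is owned by hdfs:hdfs with mode drwxr-xr-x). A common remedy on the Hortonworks sandbox is to create /user/root as the hdfs superuser and hand ownership to root. The commands below are a sketch for this sandbox; user names and paths may differ in other environments.

```shell
# Run as the HDFS superuser: create an HDFS home directory for root,
# so job submission can create /user/root/.staging underneath it.
sudo -u hdfs hdfs dfs -mkdir -p /user/root
sudo -u hdfs hdfs dfs -chown root:hdfs /user/root

# Sanity-check ownership, then re-run the script.
hdfs dfs -ls /user
pig 1.pig
```

Alternatively, run the script as a user that already has an HDFS home directory (for example the sandbox's guest user, whose /user/guest is visible in the listing above), since the staging directory is always created under the submitting user's /user/<name>.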