starting org.apache.spark.deploy.history.HistoryServer, logging to /opt/spark-2.2.0-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.history.HistoryServer-1-die132.out
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/spark-2.2.0-bin-hadoop2.7/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
17/09/27 16:42:28 INFO spark.SparkContext: Running Spark version 2.2.0
17/09/27 16:42:28 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/09/27 16:42:29 WARN spark.SparkConf: In Spark 1.0 and later spark.local.dir will be overridden by the value set by the cluster manager (via SPARK_LOCAL_DIRS in mesos/standalone and LOCAL_DIRS in YARN).
17/09/27 16:42:29 INFO spark.SparkContext: Submitted application: CeVoR-Batch
17/09/27 16:42:29 INFO spark.SecurityManager: Changing view acls to: root
17/09/27 16:42:29 INFO spark.SecurityManager: Changing modify acls to: root
17/09/27 16:42:29 INFO spark.SecurityManager: Changing view acls groups to:
17/09/27 16:42:29 INFO spark.SecurityManager: Changing modify acls groups to:
17/09/27 16:42:29 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
17/09/27 16:42:29 INFO util.Utils: Successfully started service 'sparkDriver' on port 50182.
17/09/27 16:42:29 INFO spark.SparkEnv: Registering MapOutputTracker
17/09/27 16:42:29 INFO spark.SparkEnv: Registering BlockManagerMaster
17/09/27 16:42:29 INFO storage.BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
17/09/27 16:42:29 INFO storage.BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
17/09/27 16:42:29 INFO storage.DiskBlockManager: Created local directory at /var/tmp/blockmgr-ad98f946-3ca5-4450-8fe3-7040d15c8cd6
17/09/27 16:42:29 INFO memory.MemoryStore: MemoryStore started with capacity 366.3 MB
17/09/27 16:42:29 INFO spark.SparkEnv: Registering OutputCommitCoordinator
17/09/27 16:42:29 INFO util.log: Logging initialized @2444ms
17/09/27 16:42:29 INFO server.Server: jetty-9.3.z-SNAPSHOT
17/09/27 16:42:29 INFO server.Server: Started @2537ms
17/09/27 16:42:29 INFO server.AbstractConnector: Started ServerConnector@4c24c55a{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
17/09/27 16:42:29 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@49bd54f7{/jobs,null,AVAILABLE,@Spark}
17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@39fc6b2c{/jobs/json,null,AVAILABLE,@Spark}
17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3ee39da0{/jobs/job,null,AVAILABLE,@Spark}
17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2e27d72f{/jobs/job/json,null,AVAILABLE,@Spark}
17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4837595f{/stages,null,AVAILABLE,@Spark}
17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3b718392{/stages/json,null,AVAILABLE,@Spark}
17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1f2d2181{/stages/stage,null,AVAILABLE,@Spark}
17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7668d560{/stages/stage/json,null,AVAILABLE,@Spark}
17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@126be319{/stages/pool,null,AVAILABLE,@Spark}
17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5c371e13{/stages/pool/json,null,AVAILABLE,@Spark}
17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1e34c607{/storage,null,AVAILABLE,@Spark}
17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@36b6964d{/storage/json,null,AVAILABLE,@Spark}
17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@9257031{/storage/rdd,null,AVAILABLE,@Spark}
17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7726e185{/storage/rdd/json,null,AVAILABLE,@Spark}
17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@282308c3{/environment,null,AVAILABLE,@Spark}
17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1db0ec27{/environment/json,null,AVAILABLE,@Spark}
17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@d4ab71a{/executors,null,AVAILABLE,@Spark}
17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1af05b03{/executors/json,null,AVAILABLE,@Spark}
17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1ad777f{/executors/threadDump,null,AVAILABLE,@Spark}
17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@438bad7c{/executors/threadDump/json,null,AVAILABLE,@Spark}
17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4fdf8f12{/static,null,AVAILABLE,@Spark}
17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3b582111{/,null,AVAILABLE,@Spark}
17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1e8823d2{/api,null,AVAILABLE,@Spark}
17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6105f8a3{/jobs/job/kill,null,AVAILABLE,@Spark}
17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@77e2a6e2{/stages/stage/kill,null,AVAILABLE,@Spark}
17/09/27 16:42:29 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.10.154.132:4040
17/09/27 16:42:29 INFO spark.SparkContext: Added JAR /tmp/jar/postgresql-42.1.4.jar at spark://10.10.154.132:50182/jars/postgresql-42.1.4.jar with timestamp 1506523349975
17/09/27 16:42:29 INFO spark.SparkContext: Added JAR /application/cevor-spark-batch-jar-with-dependencies.jar at spark://10.10.154.132:50182/jars/cevor-spark-batch-jar-with-dependencies.jar with timestamp 1506523349976
17/09/27 16:42:30 INFO client.StandaloneAppClient$ClientEndpoint: Connecting to master spark://die132:7077...
17/09/27 16:42:30 INFO client.TransportClientFactory: Successfully created connection to die132/10.10.154.132:7077 after 38 ms (0 ms spent in bootstraps)
17/09/27 16:42:30 INFO cluster.StandaloneSchedulerBackend: Connected to Spark cluster with app ID app-20170927164230-0024
17/09/27 16:42:30 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 55963.
17/09/27 16:42:30 INFO netty.NettyBlockTransferService: Server created on 10.10.154.132:55963
17/09/27 16:42:30 INFO storage.BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
17/09/27 16:42:30 INFO storage.BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.10.154.132, 55963, None)
17/09/27 16:42:30 INFO storage.BlockManagerMasterEndpoint: Registering block manager 10.10.154.132:55963 with 366.3 MB RAM, BlockManagerId(driver, 10.10.154.132, 55963, None)
17/09/27 16:42:30 INFO storage.BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.10.154.132, 55963, None)
17/09/27 16:42:30 INFO storage.BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.10.154.132, 55963, None)
17/09/27 16:42:30 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@56da52a7{/metrics/json,null,AVAILABLE,@Spark}
17/09/27 16:42:31 INFO scheduler.EventLoggingListener: Logging events to file:/tmp/spark-events/app-20170927164230-0024
17/09/27 16:42:31 INFO cluster.StandaloneSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
17/09/27 16:42:31 WARN spark.SparkContext: Using an existing SparkContext; some configuration may not take effect.
17/09/27 16:42:31 INFO internal.SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/home/LAMPIRIS/dobl/spark-warehouse/').
17/09/27 16:42:31 INFO internal.SharedState: Warehouse path is 'file:/home/LAMPIRIS/dobl/spark-warehouse/'.
17/09/27 16:42:31 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@55a609dd{/SQL,null,AVAILABLE,@Spark}
17/09/27 16:42:31 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4d0753c9{/SQL/json,null,AVAILABLE,@Spark}
17/09/27 16:42:31 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4d27d9d{/SQL/execution,null,AVAILABLE,@Spark}
17/09/27 16:42:31 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@20411320{/SQL/execution/json,null,AVAILABLE,@Spark}
17/09/27 16:42:31 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@255e5e2e{/static/sql,null,AVAILABLE,@Spark}
17/09/27 16:42:31 INFO state.StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
Hadoop path : hdfs://die132:9000/cevor.properties
Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, yarn-default.xml, yarn-site.xml, hdfs-default.xml, hdfs-site.xml
fs DFS[DFSClient[clientName=DFSClient_NONMAPREDUCE_1106011885_1, ugi=root (auth:SIMPLE)]]
cevor.volume.database.url = jdbc:postgresql://die110:5432/cevor
cevor.volume.database.user = cevorapi
cevor.volume.directory = volumes/
group.id = cevor.group
cevor.volume.database.driver = org.postgresql.Driver
cevor.reference.directory = references/
gtsi.reader.service.address = http://p2diel2etraps1:8080/gtsi/out
bootstrap.servers = die131:9092
cevor.reference.filename = references.parquet
cevor.volume.filename = volumes.parquet
cevor.volume.database.password = ##########
cevor.hdfs.master = hdfs://die132:9000/
gtsi.writer.service.address = http://p2diel2etraps1:8080/gtsi/in
Exception in thread "main" org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/CevorSlicingBatch":clusterlauncher:supergroup:drwxr-xr-x
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1728)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1712)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1695)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2515)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2450)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2334)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:624)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:397)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
	at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1653)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1689)
	at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1624)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:448)
	at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:444)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:459)
	at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:387)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:911)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:892)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:854)
	at org.apache.hadoop.fs.FileSystem.createNewFile(FileSystem.java:1154)
	at be.lampiris.el2.cevor.CevorSlicingBatch.lock(CevorSlicingBatch.java:105)
	at be.lampiris.el2.cevor.CevorSlicingBatch.main(CevorSlicingBatch.java:94)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:755)
	at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
	at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
	at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
	at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=root, access=WRITE, inode="/CevorSlicingBatch":clusterlauncher:supergroup:drwxr-xr-x
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1728)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1712)
	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1695)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2515)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2450)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2334)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:624)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:397)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
	at org.apache.hadoop.ipc.Client.call(Client.java:1475)
	at org.apache.hadoop.ipc.Client.call(Client.java:1412)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
	at com.sun.proxy.$Proxy16.create(Unknown Source)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:296)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
	at com.sun.proxy.$Proxy17.create(Unknown Source)
	at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1648)
	... 22 more
root@die132:/home/LAMPIRIS/dobl#
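
What the trace shows: the job is submitted as user `root` (note `ugi=root (auth:SIMPLE)` above), but `CevorSlicingBatch.lock()` tries to create a lock file under the HDFS directory `/CevorSlicingBatch`, which is owned by `clusterlauncher:supergroup` with mode `drwxr-xr-x`, so only `clusterlauncher` may write there. With HDFS simple authentication, `root` has no special privileges unless it is the configured HDFS superuser. Two common remedies, sketched below; the chosen owner/mode and the decision to impersonate `clusterlauncher` are assumptions to adapt to your cluster's policy, and the `-chown` variant must be run as the HDFS superuser:

```shell
# Option 1: give the submitting user write access to the directory
# (run as the HDFS superuser; adjust owner/group/mode to your policy).
hdfs dfs -chown -R root:supergroup /CevorSlicingBatch
# or, less restrictively, grant group/other write instead of changing ownership:
# hdfs dfs -chmod 775 /CevorSlicingBatch

# Option 2: submit the job as the directory's owner. With simple auth,
# HADOOP_USER_NAME overrides the OS user seen by HDFS (hypothetical values).
export HADOOP_USER_NAME=clusterlauncher
spark-submit --master spark://die132:7077 \
  --jars /tmp/jar/postgresql-42.1.4.jar \
  /application/cevor-spark-batch-jar-with-dependencies.jar
```

Option 2 only works because authentication is disabled (`auth:SIMPLE`); on a Kerberized cluster the job would instead need a keytab/ticket for `clusterlauncher`.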