- starting org.apache.spark.deploy.history.HistoryServer, logging to /opt/spark-2.2.0-bin-hadoop2.7/logs/spark-root-org.apache.spark.deploy.history.HistoryServer-1-die132.out
- SLF4J: Class path contains multiple SLF4J bindings.
- SLF4J: Found binding in [jar:file:/opt/spark-2.2.0-bin-hadoop2.7/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
- SLF4J: Found binding in [jar:file:/opt/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
- SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
- SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
- 17/09/27 16:42:28 INFO spark.SparkContext: Running Spark version 2.2.0
- 17/09/27 16:42:28 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
- 17/09/27 16:42:29 WARN spark.SparkConf: In Spark 1.0 and later spark.local.dir will be overridden by the value set by the cluster manager (via SPARK_LOCAL_DIRS in mesos/standalone and LOCAL_DIRS in YARN).
- 17/09/27 16:42:29 INFO spark.SparkContext: Submitted application: CeVoR-Batch
- 17/09/27 16:42:29 INFO spark.SecurityManager: Changing view acls to: root
- 17/09/27 16:42:29 INFO spark.SecurityManager: Changing modify acls to: root
- 17/09/27 16:42:29 INFO spark.SecurityManager: Changing view acls groups to:
- 17/09/27 16:42:29 INFO spark.SecurityManager: Changing modify acls groups to:
- 17/09/27 16:42:29 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
- 17/09/27 16:42:29 INFO util.Utils: Successfully started service 'sparkDriver' on port 50182.
- 17/09/27 16:42:29 INFO spark.SparkEnv: Registering MapOutputTracker
- 17/09/27 16:42:29 INFO spark.SparkEnv: Registering BlockManagerMaster
- 17/09/27 16:42:29 INFO storage.BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
- 17/09/27 16:42:29 INFO storage.BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
- 17/09/27 16:42:29 INFO storage.DiskBlockManager: Created local directory at /var/tmp/blockmgr-ad98f946-3ca5-4450-8fe3-7040d15c8cd6
- 17/09/27 16:42:29 INFO memory.MemoryStore: MemoryStore started with capacity 366.3 MB
- 17/09/27 16:42:29 INFO spark.SparkEnv: Registering OutputCommitCoordinator
- 17/09/27 16:42:29 INFO util.log: Logging initialized @2444ms
- 17/09/27 16:42:29 INFO server.Server: jetty-9.3.z-SNAPSHOT
- 17/09/27 16:42:29 INFO server.Server: Started @2537ms
- 17/09/27 16:42:29 INFO server.AbstractConnector: Started ServerConnector@4c24c55a{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
- 17/09/27 16:42:29 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
- 17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@49bd54f7{/jobs,null,AVAILABLE,@Spark}
- 17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@39fc6b2c{/jobs/json,null,AVAILABLE,@Spark}
- 17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3ee39da0{/jobs/job,null,AVAILABLE,@Spark}
- 17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@2e27d72f{/jobs/job/json,null,AVAILABLE,@Spark}
- 17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4837595f{/stages,null,AVAILABLE,@Spark}
- 17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3b718392{/stages/json,null,AVAILABLE,@Spark}
- 17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1f2d2181{/stages/stage,null,AVAILABLE,@Spark}
- 17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7668d560{/stages/stage/json,null,AVAILABLE,@Spark}
- 17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@126be319{/stages/pool,null,AVAILABLE,@Spark}
- 17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@5c371e13{/stages/pool/json,null,AVAILABLE,@Spark}
- 17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1e34c607{/storage,null,AVAILABLE,@Spark}
- 17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@36b6964d{/storage/json,null,AVAILABLE,@Spark}
- 17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@9257031{/storage/rdd,null,AVAILABLE,@Spark}
- 17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@7726e185{/storage/rdd/json,null,AVAILABLE,@Spark}
- 17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@282308c3{/environment,null,AVAILABLE,@Spark}
- 17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1db0ec27{/environment/json,null,AVAILABLE,@Spark}
- 17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@d4ab71a{/executors,null,AVAILABLE,@Spark}
- 17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1af05b03{/executors/json,null,AVAILABLE,@Spark}
- 17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1ad777f{/executors/threadDump,null,AVAILABLE,@Spark}
- 17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@438bad7c{/executors/threadDump/json,null,AVAILABLE,@Spark}
- 17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4fdf8f12{/static,null,AVAILABLE,@Spark}
- 17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@3b582111{/,null,AVAILABLE,@Spark}
- 17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@1e8823d2{/api,null,AVAILABLE,@Spark}
- 17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@6105f8a3{/jobs/job/kill,null,AVAILABLE,@Spark}
- 17/09/27 16:42:29 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@77e2a6e2{/stages/stage/kill,null,AVAILABLE,@Spark}
- 17/09/27 16:42:29 INFO ui.SparkUI: Bound SparkUI to 0.0.0.0, and started at http://10.10.154.132:4040
- 17/09/27 16:42:29 INFO spark.SparkContext: Added JAR /tmp/jar/postgresql-42.1.4.jar at spark://10.10.154.132:50182/jars/postgresql-42.1.4.jar with timestamp 1506523349975
- 17/09/27 16:42:29 INFO spark.SparkContext: Added JAR /application/cevor-spark-batch-jar-with-dependencies.jar at spark://10.10.154.132:50182/jars/cevor-spark-batch-jar-with-dependencies.jar with timestamp 1506523349976
- 17/09/27 16:42:30 INFO client.StandaloneAppClient$ClientEndpoint: Connecting to master spark://die132:7077...
- 17/09/27 16:42:30 INFO client.TransportClientFactory: Successfully created connection to die132/10.10.154.132:7077 after 38 ms (0 ms spent in bootstraps)
- 17/09/27 16:42:30 INFO cluster.StandaloneSchedulerBackend: Connected to Spark cluster with app ID app-20170927164230-0024
- 17/09/27 16:42:30 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 55963.
- 17/09/27 16:42:30 INFO netty.NettyBlockTransferService: Server created on 10.10.154.132:55963
- 17/09/27 16:42:30 INFO storage.BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
- 17/09/27 16:42:30 INFO storage.BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 10.10.154.132, 55963, None)
- 17/09/27 16:42:30 INFO storage.BlockManagerMasterEndpoint: Registering block manager 10.10.154.132:55963 with 366.3 MB RAM, BlockManagerId(driver, 10.10.154.132, 55963, None)
- 17/09/27 16:42:30 INFO storage.BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.10.154.132, 55963, None)
- 17/09/27 16:42:30 INFO storage.BlockManager: Initialized BlockManager: BlockManagerId(driver, 10.10.154.132, 55963, None)
- 17/09/27 16:42:30 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@56da52a7{/metrics/json,null,AVAILABLE,@Spark}
- 17/09/27 16:42:31 INFO scheduler.EventLoggingListener: Logging events to file:/tmp/spark-events/app-20170927164230-0024
- 17/09/27 16:42:31 INFO cluster.StandaloneSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
- 17/09/27 16:42:31 WARN spark.SparkContext: Using an existing SparkContext; some configuration may not take effect.
- 17/09/27 16:42:31 INFO internal.SharedState: Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/home/LAMPIRIS/dobl/spark-warehouse/').
- 17/09/27 16:42:31 INFO internal.SharedState: Warehouse path is 'file:/home/LAMPIRIS/dobl/spark-warehouse/'.
- 17/09/27 16:42:31 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@55a609dd{/SQL,null,AVAILABLE,@Spark}
- 17/09/27 16:42:31 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4d0753c9{/SQL/json,null,AVAILABLE,@Spark}
- 17/09/27 16:42:31 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@4d27d9d{/SQL/execution,null,AVAILABLE,@Spark}
- 17/09/27 16:42:31 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@20411320{/SQL/execution/json,null,AVAILABLE,@Spark}
- 17/09/27 16:42:31 INFO handler.ContextHandler: Started o.s.j.s.ServletContextHandler@255e5e2e{/static/sql,null,AVAILABLE,@Spark}
- 17/09/27 16:42:31 INFO state.StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
- Hadoop path : hdfs://die132:9000/cevor.properties
- Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, yarn-default.xml, yarn-site.xml, hdfs-default.xml, hdfs-site.xml
- fs DFS[DFSClient[clientName=DFSClient_NONMAPREDUCE_1106011885_1, ugi=root (auth:SIMPLE)]]
- cevor.volume.database.url = jdbc:postgresql://die110:5432/cevor
- cevor.volume.database.user = cevorapi
- cevor.volume.directory = volumes/
- group.id = cevor.group
- cevor.volume.database.driver = org.postgresql.Driver
- cevor.reference.directory = references/
- gtsi.reader.service.address = http://p2diel2etraps1:8080/gtsi/out
- bootstrap.servers = die131:9092
- cevor.reference.filename = references.parquet
- cevor.volume.filename = volumes.parquet
- cevor.volume.database.password = ##########
- cevor.hdfs.master = hdfs://die132:9000/
- gtsi.writer.service.address = http://p2diel2etraps1:8080/gtsi/in
- Exception in thread "main" org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/CevorSlicingBatch":clusterlauncher:supergroup:drwxr-xr-x
- at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
- at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
- at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
- at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
- at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1728)
- at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1712)
- at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1695)
- at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2515)
- at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2450)
- at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2334)
- at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:624)
- at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:397)
- at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
- at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
- at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
- at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
- at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
- at java.security.AccessController.doPrivileged(Native Method)
- at javax.security.auth.Subject.doAs(Subject.java:422)
- at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
- at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
- at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
- at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
- at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
- at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
- at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
- at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
- at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1653)
- at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1689)
- at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1624)
- at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:448)
- at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:444)
- at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
- at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:459)
- at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:387)
- at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:911)
- at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:892)
- at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:854)
- at org.apache.hadoop.fs.FileSystem.createNewFile(FileSystem.java:1154)
- at be.lampiris.el2.cevor.CevorSlicingBatch.lock(CevorSlicingBatch.java:105)
- at be.lampiris.el2.cevor.CevorSlicingBatch.main(CevorSlicingBatch.java:94)
- at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
- at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
- at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
- at java.lang.reflect.Method.invoke(Method.java:498)
- at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:755)
- at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
- at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
- at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
- at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
- Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=root, access=WRITE, inode="/CevorSlicingBatch":clusterlauncher:supergroup:drwxr-xr-x
- at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
- at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
- at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)
- at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:190)
- at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1728)
- at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1712)
- at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1695)
- at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:2515)
- at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2450)
- at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2334)
- at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:624)
- at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:397)
- at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
- at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
- at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
- at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
- at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
- at java.security.AccessController.doPrivileged(Native Method)
- at javax.security.auth.Subject.doAs(Subject.java:422)
- at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
- at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)
- at org.apache.hadoop.ipc.Client.call(Client.java:1475)
- at org.apache.hadoop.ipc.Client.call(Client.java:1412)
- at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
- at com.sun.proxy.$Proxy16.create(Unknown Source)
- at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:296)
- at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
- at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
- at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
- at java.lang.reflect.Method.invoke(Method.java:498)
- at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
- at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
- at com.sun.proxy.$Proxy17.create(Unknown Source)
- at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1648)
- ... 22 more
- root@die132:/home/LAMPIRIS/dobl#
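The root cause is the `AccessControlException` above: the job was submitted as `root`, but the target directory `/CevorSlicingBatch` is owned by `clusterlauncher:supergroup` with mode `drwxr-xr-x`, so `root` (which is not an HDFS superuser by default) cannot create the lock file there. A minimal sketch of the usual remedies, assuming a simple-auth (non-Kerberos) cluster and taking the path and user names from the trace above:

```shell
# Option 1: impersonate the directory's owner when submitting
# (HADOOP_USER_NAME only works with simple authentication, not Kerberos)
export HADOOP_USER_NAME=clusterlauncher

# Option 2: hand ownership of the target directory to the submitting user
# (run as the HDFS superuser, e.g. the user that started the NameNode)
hdfs dfs -chown -R root:supergroup /CevorSlicingBatch

# Option 3: grant group write access instead of changing ownership
# (coarser; avoid world-writable modes on shared clusters)
hdfs dfs -chmod -R 775 /CevorSlicingBatch
```

Any one of the three is sufficient; after that, re-running the same `spark-submit` command should get past `CevorSlicingBatch.lock(CevorSlicingBatch.java:105)`.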