[INFO] Scanning for projects...
[WARNING]
[WARNING] Some problems were encountered while building the effective model for org.anahata:anahata:jar:1.0-SNAPSHOT
[WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-jar-plugin is missing. @ line 125, column 15
[WARNING]
[WARNING] It is highly recommended to fix these problems because they threaten the stability of your build.
[WARNING]
[WARNING] For this reason, future Maven versions might no longer support building such malformed projects.
[WARNING]
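The 'build.plugins.plugin.version' warning above means the POM declares maven-jar-plugin without an explicit version, so Maven resolves an arbitrary default. Pinning a version in the POM silences it; a minimal sketch (the version number below is an assumption, use whichever release fits your build):

```xml
<!-- pom.xml, under <build><plugins>: pin the plugin version explicitly.
     2.4 is an example version, not taken from this project's POM. -->
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-jar-plugin</artifactId>
    <version>2.4</version>
</plugin>
```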
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building anahata 1.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-resources-plugin:2.4.3:resources (default-resources) @ anahata ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/apurv/Desktop/anahad/src/main/resources
[INFO]
[INFO] --- maven-compiler-plugin:2.3.2:compile (default-compile) @ anahata ---
[INFO] Compiling 32 source files to /home/apurv/Desktop/anahad/target/classes
[INFO]
[INFO] --- maven-resources-plugin:2.4.3:testResources (default-testResources) @ anahata ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/apurv/Desktop/anahad/src/test/resources
[INFO]
[INFO] --- maven-compiler-plugin:2.3.2:testCompile (default-testCompile) @ anahata ---
[INFO] Compiling 3 source files to /home/apurv/Desktop/anahad/target/test-classes
[INFO]
[INFO] --- maven-surefire-plugin:2.7.2:test (default-test) @ anahata ---
[INFO] Surefire report directory: /home/apurv/Desktop/anahad/target/surefire-reports
-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running org.anahata.play.hadoop.ShowFileStatusTest
12/05/20 13:44:02 INFO util.GSet: VM type = 32-bit
12/05/20 13:44:02 INFO util.GSet: 2% max memory = 17.7425 MB
12/05/20 13:44:02 INFO util.GSet: capacity = 2^22 = 4194304 entries
12/05/20 13:44:02 INFO util.GSet: recommended=4194304, actual=4194304
12/05/20 13:44:02 INFO namenode.FSNamesystem: fsOwner=apurv
12/05/20 13:44:02 INFO namenode.FSNamesystem: supergroup=supergroup
12/05/20 13:44:02 INFO namenode.FSNamesystem: isPermissionEnabled=true
12/05/20 13:44:02 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
12/05/20 13:44:02 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
12/05/20 13:44:02 INFO namenode.NameNode: Caching file names occuring more than 10 times
12/05/20 13:44:02 INFO common.Storage: Image file of size 111 saved in 0 seconds.
12/05/20 13:44:03 INFO common.Storage: Storage directory /tmp/dfs/name1 has been successfully formatted.
12/05/20 13:44:03 INFO common.Storage: Image file of size 111 saved in 0 seconds.
12/05/20 13:44:03 INFO common.Storage: Storage directory /tmp/dfs/name2 has been successfully formatted.
12/05/20 13:44:03 WARN impl.MetricsSystemImpl: Metrics system not started: Cannot locate configuration: tried hadoop-metrics2-namenode.properties, hadoop-metrics2.properties
12/05/20 13:44:03 INFO util.GSet: VM type = 32-bit
12/05/20 13:44:03 INFO util.GSet: 2% max memory = 17.7425 MB
12/05/20 13:44:03 INFO util.GSet: capacity = 2^22 = 4194304 entries
12/05/20 13:44:03 INFO util.GSet: recommended=4194304, actual=4194304
12/05/20 13:44:03 INFO namenode.FSNamesystem: fsOwner=apurv
12/05/20 13:44:03 INFO namenode.FSNamesystem: supergroup=supergroup
12/05/20 13:44:03 INFO namenode.FSNamesystem: isPermissionEnabled=true
12/05/20 13:44:03 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
12/05/20 13:44:03 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
12/05/20 13:44:03 INFO namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
12/05/20 13:44:03 INFO namenode.NameNode: Caching file names occuring more than 10 times
12/05/20 13:44:03 INFO common.Storage: Number of files = 1
12/05/20 13:44:03 INFO common.Storage: Number of files under construction = 0
12/05/20 13:44:03 INFO common.Storage: Image file of size 111 loaded in 0 seconds.
12/05/20 13:44:03 INFO common.Storage: Edits file /tmp/dfs/name1/current/edits of size 4 edits # 0 loaded in 0 seconds.
12/05/20 13:44:03 INFO common.Storage: Image file of size 111 saved in 0 seconds.
12/05/20 13:44:03 INFO common.Storage: Image file of size 111 saved in 0 seconds.
12/05/20 13:44:04 INFO common.Storage: Image file of size 111 saved in 0 seconds.
12/05/20 13:44:04 INFO common.Storage: Image file of size 111 saved in 0 seconds.
12/05/20 13:44:04 INFO namenode.NameCache: initialized with 0 entries 0 lookups
12/05/20 13:44:04 INFO namenode.FSNamesystem: Finished loading FSImage in 913 msecs
12/05/20 13:44:04 INFO namenode.FSNamesystem: Total number of blocks = 0
12/05/20 13:44:04 INFO namenode.FSNamesystem: Number of invalid blocks = 0
12/05/20 13:44:04 INFO namenode.FSNamesystem: Number of under-replicated blocks = 0
12/05/20 13:44:04 INFO namenode.FSNamesystem: Number of over-replicated blocks = 0
12/05/20 13:44:04 INFO hdfs.StateChange: STATE* Safe mode termination scan for invalid, over- and under-replicated blocks completed in 12 msec
12/05/20 13:44:04 INFO hdfs.StateChange: STATE* Leaving safe mode after 0 secs.
12/05/20 13:44:04 INFO hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
12/05/20 13:44:04 INFO hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
12/05/20 13:44:04 INFO util.HostsFileReader: Refreshing hosts (include/exclude) list
12/05/20 13:44:04 INFO namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
12/05/20 13:44:04 INFO namenode.FSNamesystem: ReplicateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
12/05/20 13:44:04 INFO namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
12/05/20 13:44:04 INFO namenode.FSNamesystem: InvalidateQueue QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
12/05/20 13:44:04 INFO ipc.Server: Starting SocketReader
12/05/20 13:44:04 INFO namenode.NameNode: Namenode up at: localhost/127.0.0.1:59798
12/05/20 13:44:04 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
12/05/20 13:44:04 INFO http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
12/05/20 13:44:04 INFO http.HttpServer: dfs.webhdfs.enabled = false
12/05/20 13:44:04 INFO http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 0
12/05/20 13:44:04 INFO http.HttpServer: listener.getLocalPort() returned 48665 webServer.getConnectors()[0].getLocalPort() returned 48665
12/05/20 13:44:04 INFO http.HttpServer: Jetty bound to port 48665
12/05/20 13:44:04 INFO mortbay.log: jetty-6.1.14
12/05/20 13:44:04 INFO mortbay.log: Extract jar:file:/home/apurv/.m2/repository/org/apache/hadoop/hadoop-core/1.0.0/hadoop-core-1.0.0.jar!/webapps/hdfs to /tmp/Jetty_localhost_48665_hdfs____.1ih59h/webapp
12/05/20 13:44:05 INFO mortbay.log: Started SelectChannelConnector@localhost:48665
12/05/20 13:44:05 INFO namenode.NameNode: Web-server up at: localhost:48665
12/05/20 13:44:05 INFO ipc.Server: IPC Server Responder: starting
12/05/20 13:44:05 INFO ipc.Server: IPC Server listener on 59798: starting
12/05/20 13:44:05 INFO ipc.Server: IPC Server handler 0 on 59798: starting
12/05/20 13:44:05 INFO ipc.Server: IPC Server handler 1 on 59798: starting
12/05/20 13:44:05 INFO ipc.Server: IPC Server handler 2 on 59798: starting
12/05/20 13:44:05 INFO ipc.Server: IPC Server handler 3 on 59798: starting
12/05/20 13:44:05 INFO ipc.Server: IPC Server handler 4 on 59798: starting
12/05/20 13:44:05 INFO ipc.Server: IPC Server handler 5 on 59798: starting
12/05/20 13:44:05 INFO ipc.Server: IPC Server handler 6 on 59798: starting
12/05/20 13:44:05 INFO ipc.Server: IPC Server handler 7 on 59798: starting
12/05/20 13:44:05 INFO ipc.Server: IPC Server handler 8 on 59798: starting
12/05/20 13:44:05 INFO ipc.Server: IPC Server handler 9 on 59798: starting
Starting DataNode 0 with dfs.data.dir: /tmp/dfs/data/data1,/tmp/dfs/data/data2
12/05/20 13:44:05 WARN impl.MetricsSystemImpl: Metrics system not started: Cannot locate configuration: tried hadoop-metrics2-datanode.properties, hadoop-metrics2.properties
12/05/20 13:44:05 WARN util.MBeans: Hadoop:service=DataNode,name=MetricsSystem,sub=Control
javax.management.InstanceAlreadyExistsException: MXBean already registered with name Hadoop:service=NameNode,name=MetricsSystem,sub=Control
    at com.sun.jmx.mbeanserver.MXBeanLookup.addReference(MXBeanLookup.java:120)
    at com.sun.jmx.mbeanserver.MXBeanSupport.register(MXBeanSupport.java:143)
    at com.sun.jmx.mbeanserver.MBeanSupport.preRegister2(MBeanSupport.java:183)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:941)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:917)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:312)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:482)
    at org.apache.hadoop.metrics2.util.MBeans.register(MBeans.java:56)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.initSystemMBean(MetricsSystemImpl.java:500)
    at org.apache.hadoop.metrics2.impl.MetricsSystemImpl.init(MetricsSystemImpl.java:140)
    at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.init(DefaultMetricsSystem.java:40)
    at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.initialize(DefaultMetricsSystem.java:50)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1491)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1467)
    at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:417)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:280)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:124)
    at org.anahata.play.hadoop.ShowFileStatusTest.setUp(ShowFileStatusTest.java:57)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
    at org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
    at org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:35)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:115)
    at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:97)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.maven.surefire.booter.ProviderFactory$ClassLoaderProxy.invoke(ProviderFactory.java:103)
    at $Proxy0.invoke(Unknown Source)
    at org.apache.maven.surefire.booter.SurefireStarter.invokeProvider(SurefireStarter.java:150)
    at org.apache.maven.surefire.booter.SurefireStarter.runSuitesInProcess(SurefireStarter.java:91)
    at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:69)
12/05/20 13:44:05 INFO common.Storage: Storage directory /tmp/dfs/data/data1 is not formatted.
12/05/20 13:44:05 INFO common.Storage: Formatting ...
12/05/20 13:44:05 INFO common.Storage: Storage directory /tmp/dfs/data/data2 is not formatted.
12/05/20 13:44:05 INFO common.Storage: Formatting ...
12/05/20 13:44:05 INFO datanode.DataNode: Registered FSDatasetStatusMBean
12/05/20 13:44:05 INFO datanode.DataNode: Opened info server at 34276
12/05/20 13:44:05 INFO datanode.DataNode: Balancing bandwith is 1048576 bytes/s
12/05/20 13:44:05 INFO http.HttpServer: Added global filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
12/05/20 13:44:05 INFO datanode.DataNode: dfs.webhdfs.enabled = false
12/05/20 13:44:05 INFO http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 0
12/05/20 13:44:05 INFO http.HttpServer: listener.getLocalPort() returned 48133 webServer.getConnectors()[0].getLocalPort() returned 48133
12/05/20 13:44:05 INFO http.HttpServer: Jetty bound to port 48133
12/05/20 13:44:05 INFO mortbay.log: jetty-6.1.14
12/05/20 13:44:05 INFO mortbay.log: Extract jar:file:/home/apurv/.m2/repository/org/apache/hadoop/hadoop-core/1.0.0/hadoop-core-1.0.0.jar!/webapps/datanode to /tmp/Jetty_localhost_48133_datanode____1wzd10/webapp
12/05/20 13:44:05 INFO mortbay.log: Started SelectChannelConnector@localhost:48133
12/05/20 13:44:05 INFO datanode.DataNode: dnRegistration = DatanodeRegistration(127.0.0.1:34276, storageID=, infoPort=48133, ipcPort=56739)
12/05/20 13:44:05 INFO hdfs.StateChange: BLOCK* NameSystem.registerDatanode: node registration from 127.0.0.1:34276 storage DS-1332539920-127.0.1.1-34276-1337501645976
12/05/20 13:44:05 INFO net.NetworkTopology: Adding a new node: /default-rack/127.0.0.1:34276
12/05/20 13:44:05 INFO ipc.Server: Starting SocketReader
12/05/20 13:44:06 INFO datanode.DataNode: New storage id DS-1332539920-127.0.1.1-34276-1337501645976 is assigned to data-node 127.0.0.1:34276
12/05/20 13:44:06 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:34276, storageID=DS-1332539920-127.0.1.1-34276-1337501645976, infoPort=48133, ipcPort=56739)In DataNode.run, data = FSDataset{dirpath='/tmp/dfs/data/data1/current,/tmp/dfs/data/data2/current'}
12/05/20 13:44:06 INFO ipc.Server: IPC Server Responder: starting
12/05/20 13:44:06 INFO ipc.Server: IPC Server listener on 56739: starting
12/05/20 13:44:06 INFO ipc.Server: IPC Server handler 0 on 56739: starting
12/05/20 13:44:06 INFO ipc.Server: IPC Server handler 1 on 56739: starting
12/05/20 13:44:06 INFO datanode.DataNode: using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
12/05/20 13:44:06 INFO ipc.Server: IPC Server handler 2 on 56739: starting
12/05/20 13:44:06 INFO hdfs.StateChange: *BLOCK* NameSystem.processReport: from 127.0.0.1:34276, blocks: 0, processing time: 1 msecs
12/05/20 13:44:06 INFO datanode.DataNode: BlockReport of 0 blocks took 0 msec to generate and 23 msecs for RPC and NN processing
12/05/20 13:44:06 INFO datanode.DataNode: Starting Periodic block scanner.
12/05/20 13:44:06 ERROR security.UserGroupInformation: PriviledgedActionException as:apurv cause:java.io.IOException: java.lang.NoClassDefFoundError: javax/ws/rs/core/StreamingOutput
12/05/20 13:44:06 INFO ipc.Server: IPC Server handler 8 on 59798, call create(/dir/file, rwxr-xr-x, DFSClient_1676392726, true, true, 1, 67108864) from 127.0.0.1:47240: error: java.io.IOException: java.lang.NoClassDefFoundError: javax/ws/rs/core/StreamingOutput
java.io.IOException: java.lang.NoClassDefFoundError: javax/ws/rs/core/StreamingOutput
    at org.apache.hadoop.hdfs.server.namenode.NameNode.getClientMachine(NameNode.java:589)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:619)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1083)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
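The root cause of the failure is the error just above: the NameNode's create() call dies with NoClassDefFoundError for javax.ws.rs.core.StreamingOutput, a JAX-RS (JSR 311) class that hadoop-core 1.0.0 references even though the log shows dfs.webhdfs.enabled = false. In other words, the JSR 311 API jar is missing from the test classpath. A likely fix, assuming the project depends on hadoop-core 1.0.0 via Maven as the log suggests, is to add the JSR 311 API dependency to the POM (version 1.1.1 is the usual pairing, but verify with mvn dependency:tree):

```xml
<!-- pom.xml: supplies javax.ws.rs.core.StreamingOutput, which hadoop-core 1.0.0
     needs at runtime even with WebHDFS disabled. The version is an assumption;
     check it against your dependency tree and adjust the scope as appropriate. -->
<dependency>
    <groupId>javax.ws.rs</groupId>
    <artifactId>jsr311-api</artifactId>
    <version>1.1.1</version>
</dependency>
```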
Shutting down the Mini HDFS Cluster
Shutting down DataNode 0
12/05/20 13:44:06 INFO ipc.Server: Stopping server on 56739
12/05/20 13:44:06 INFO ipc.Server: IPC Server handler 0 on 56739: exiting
12/05/20 13:44:06 INFO ipc.Server: IPC Server handler 1 on 56739: exiting
12/05/20 13:44:06 INFO ipc.Server: IPC Server handler 2 on 56739: exiting
12/05/20 13:44:06 INFO ipc.Server: Stopping IPC Server listener on 56739
12/05/20 13:44:06 INFO ipc.Server: Stopping IPC Server Responder
12/05/20 13:44:06 INFO metrics.RpcInstrumentation: shut down
12/05/20 13:44:06 WARN datanode.DataNode: DatanodeRegistration(127.0.0.1:34276, storageID=DS-1332539920-127.0.1.1-34276-1337501645976, infoPort=48133, ipcPort=56739):DataXceiveServer:java.nio.channels.AsynchronousCloseException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185)
    at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159)
    at sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:131)
    at java.lang.Thread.run(Thread.java:662)
12/05/20 13:44:06 INFO datanode.DataNode: Exiting DataXceiveServer
12/05/20 13:44:06 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
12/05/20 13:44:06 INFO datanode.DataBlockScanner: Exiting DataBlockScanner thread.
12/05/20 13:44:06 INFO datanode.DataNode: DatanodeRegistration(127.0.0.1:34276, storageID=DS-1332539920-127.0.1.1-34276-1337501645976, infoPort=48133, ipcPort=56739):Finishing DataNode in: FSDataset{dirpath='/tmp/dfs/data/data1/current,/tmp/dfs/data/data2/current'}
12/05/20 13:44:06 WARN util.MBeans: Hadoop:service=DataNode,name=DataNodeInfo
javax.management.InstanceNotFoundException: Hadoop:service=DataNode,name=DataNodeInfo
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1094)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:415)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:403)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:506)
    at org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:71)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.unRegisterMXBean(DataNode.java:513)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:726)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1442)
    at java.lang.Thread.run(Thread.java:662)
12/05/20 13:44:06 INFO ipc.Server: Stopping server on 56739
12/05/20 13:44:06 INFO metrics.RpcInstrumentation: shut down
12/05/20 13:44:06 INFO datanode.DataNode: Waiting for threadgroup to exit, active threads is 0
12/05/20 13:44:06 INFO datanode.FSDatasetAsyncDiskService: Shutting down all async disk service threads...
12/05/20 13:44:06 INFO datanode.FSDatasetAsyncDiskService: All async disk service threads have been shut down.
12/05/20 13:44:06 WARN util.MBeans: Hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId2044648328
javax.management.InstanceNotFoundException: Hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId2044648328
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1094)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:415)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:403)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:506)
    at org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:71)
    at org.apache.hadoop.hdfs.server.datanode.FSDataset.shutdown(FSDataset.java:1934)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:788)
    at org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNodes(MiniDFSCluster.java:566)
    at org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:550)
    at org.anahata.play.hadoop.ShowFileStatusTest.tearDown(ShowFileStatusTest.java:71)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:37)
    at org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
    at org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:35)
    at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:115)
    at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:97)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.maven.surefire.booter.ProviderFactory$ClassLoaderProxy.invoke(ProviderFactory.java:103)
    at $Proxy0.invoke(Unknown Source)
    at org.apache.maven.surefire.booter.SurefireStarter.invokeProvider(SurefireStarter.java:150)
    at org.apache.maven.surefire.booter.SurefireStarter.runSuitesInProcess(SurefireStarter.java:91)
    at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:69)
12/05/20 13:44:06 WARN datanode.FSDatasetAsyncDiskService: AsyncDiskService has already shut down.